| column | stats | min | max |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | listlengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | listlengths | 0 | 25 |
| languages | listlengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | listlengths | 0 | 352 |
| processed_texts | listlengths | 1 | 353 |
| tokens_length | listlengths | 1 | 353 |
| input_texts | listlengths | 1 | 40 |
3a9fbd40e7e3f7e4d13a7dbeebb4e9f05c64f378 | # AISBOM - AI Software Bill of Materials
[JSON Spec for Transparency Obligations of the EU AI Act](https://huggingface.co/datasets/AdrianGonzalezSanchez/AISBOM/blob/main/AISBOM_spec.json), including LLM / foundation models
Version 0.1 (December 11, 2023)
> [!NOTE]
> - This JSON file is intended as a means to address the transparency requirements in the upcoming EU AI Act (focus on Articles 13 & 52).
> - The file is an illustrative example, intended as a basis for discussion and feedback.
> - To use the file, copy the template and insert the values of the AI System at hand, using the descriptions given in the template as guidance.
> - The file is not a formal JSON Schema, but we may adopt the schema in the future for improved automated processing.
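
As a rough sketch of that workflow (not part of the spec itself), the snippet below copies the template and fills in values programmatically. The field names `system_name`, `intended_purpose`, and `provider` are hypothetical placeholders; the actual keys are defined in `AISBOM_spec.json`.

```python
import json

# Load the AISBOM template (assumes AISBOM_spec.json has been downloaded).
with open("AISBOM_spec.json", encoding="utf-8") as f:
    aisbom = json.load(f)

# Insert the values of the AI System at hand, following the descriptions
# given in the template. NOTE: these keys are hypothetical placeholders,
# not necessarily the spec's actual field names.
aisbom["system_name"] = "ExampleChatAssistant"
aisbom["intended_purpose"] = "Customer-support chat for retail banking"
aisbom["provider"] = "Example Corp"

# Save the completed AI-SBOM alongside the AI System's documentation.
with open("aisbom_example.json", "w", encoding="utf-8") as f:
    json.dump(aisbom, f, indent=2, ensure_ascii=False)
```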
## Call to action
- Please share your feedback in [Hugging Face Discussions](https://huggingface.co/datasets/AdrianGonzalezSanchez/AISBOM/discussions).
- See the call for contributions at the end of this document.
## How to cite this work
[@AdrianGonzalezSanchez](https://huggingface.co/AdrianGonzalezSanchez) (OdiseIA, HEC Montréal, IE University, Microsoft) & appliedAI Institute for Europe gGmbH (2024). AI Software Bill of Material - Transparency (AI-SBOM). [Hugging Face](https://huggingface.co/datasets/AdrianGonzalezSanchez/AISBOM)
## Overview
The AI-SBOM mainly addresses the transparency obligations outlined in Articles 13 and 52 of the EU AI Act, in order to share and emphasize relevant information with various stakeholders and interested parties.
BOM = Bill of Materials: the set of elements, an inventory, needed to compile or produce a product. The concept is adapted here to AI Systems, inspired by areas like manufacturing and cybersecurity.
## Purpose of the AI-SBOM Transparency
Collecting and providing the information required by Articles 13 and 52 can be challenging in complex AI value chains involving multiple entities who control or need certain information. The AI-SBOM Transparency is intended as the single point of truth for collecting and sharing the necessary information, keeping the following benefits in mind:
- Overview of transparency obligations: Reduces the need for an in-depth understanding of the AI Act (saving the time and effort of reading 160+ pages).
- Improved transparency risk management: Completing the AI-SBOM helps identify and address potential vulnerabilities and dependencies related to transparency throughout the development cycle of high-risk AI systems.
- Approach to simplify compliance with transparency requirements: Helps to ensure adherence to the AI Act's transparency requirements by collecting the relevant information, which, in turn, reduces deployment and liability risks.
- AI-SBOM Transparency may complement and/or refer to the instructions for use (“User Manual”). It could be a first “draft” of a “User Manual” which has to be provided to the Deployer.
## Target group of the AI-SBOM
AI-SBOM Transparency targets technical professionals engaged in compliance matters as well as compliance experts delving into technical aspects. Our goal is to support providers and deployers in managing, maintaining, and making knowledgeable choices about AI systems within the AI Act's regulations (Articles 13 and 52). Achieving this is more feasible through a collaborative approach.
## What is the scope of Article 13 AI Act? [EU Parliament's Proposal]
Article 13 AI Act applies to high-risk AI Systems (details in Article 6) and outlines requirements and considerations related to transparency and accountability in the deployment of an AI System. In a nutshell:
**Article 13 (1)**: The transparency obligations are set to enable the understanding of the outcomes and functioning of the respective AI System. Specifically, it entails the obligation to ensure that: (i) the AI System will be used properly, i.e., according to its intended purpose, by stating how the AI System actually works, (ii) details about the processed data are known, and (iii) the AI System's output is interpretable and can be explained to affected persons.
**Article 13 (2)**: Requires that the high-risk AI System be accompanied by **instructions for use** [like a "**(Digital) User Manual**"] that help the deployer (the entity putting the AI System into use) operate and maintain the AI System as intended, and that support informed decision-making by the deployer. Such a User Manual has to incorporate the information referred to in Article 13 (3) and be available prior to putting the AI System into service or placing it on the market.
**Article 13 (3)**: Specifies the concrete information that shall be communicated to reach sufficient transparency and satisfy Article 13 (1). This is the focus of the AI-SBOM and includes information such as the intended purpose of the AI System, known/foreseeable risks/misuses, desired input data, affected persons, etc. The AI-SBOM is not meant to replace or implement the instructions for use; rather, it aims to support collecting the relevant information for the instructions for use during the development process of an AI System.
Thus, high-risk AI Systems shall be designed and developed in such a way that their operation is sufficiently transparent to ensure that the respective deployer (and the provider themselves, if they deploy their own AI System internally) can appropriately interpret and use the results of the AI System ["Procedural Transparency"]. Such Procedural Transparency, as outlined in Article 13, is particularly crucial from the AI value chain perspective, from the provider to the actual deployer of the AI System.
## What is the scope of Article 52 AI Act? [EU Parliament's Proposal]
Article 52 AI Act aims to ensure the transparency of AI Systems when natural persons and/or the general public are exposed to an AI System. This is ensured in three ways:
(i) **Article 52 (1)**: If there is an interaction of the AI System with a natural person - like a Chatbot, Healthcare Diagnosis Tools used by doctors, or AI-driven robot financial advisors - such interactions have to be made transparent through a notification to the affected natural persons [“**Interaction Transparency**”].
(ii) **Article 52 (2)**: If the AI System is an emotion recognition or biometric categorization system, the affected person has to give their consent prior to the processing of such data (a connection to the GDPR) ["**Consent Transparency**"].
(iii) **Article 52 (3)**: If the AI System is generating so-called “deep fakes”, such artificially generated content shall be disclosed in a visible manner like “watermarks” [“**Content Transparency**”].
Notably, an AI System that is not classified as high-risk, and is therefore exempt from compliance with Article 13, may still be subject to the provisions of Article 52 if one of the three paragraphs applies. Conversely, if an AI System is classified as high-risk, Article 52 might apply in addition.
## Contributing
This draft is understood as a "living paper" that maps the state of an ongoing discussion and is open for feedback. We invite all stakeholders to share their insights and suggestions to enhance the tool's effectiveness and compliance capabilities. Please consider our notes for feedback and discussion.
**Note #1**: This AI-SBOM Transparency is for discussion purposes and does not constitute legal advice. It is essential to consult with legal experts to ensure full compliance with the AI Act.
**Note #2**: We mainly worked with the EU Parliament's proposal; the final text of the AI Act is still unknown. Also, the standards for Article 13 and Article 52 are under development and not yet published. The AI-SBOM is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
**Note #3**: Recognizing the variety of stakeholders involved in the AI lifecycle, each possessing varying degrees of technical know-how, we understand that transparency is not a one-size-fits-all attribute. AI systems should offer tailored transparency across the AI value chain, catering to the unique needs and perspectives of each stakeholder. This calls for a collaborative effort among all parties involved to ensure effective transparency.
**Note #4**: Please be aware that transparency is in intense tension (especially for proprietary AI Systems) with **Data Privacy** (access to / description of training data), **IP/trade secrets** (access to / description of the model), and **Cyber Security** (access to / description of training data and model vulnerabilities) - [altogether "Sensitive Information"] | AdrianGonzalezSanchez/AISBOM | [
"language:en",
"license:mit",
"AI Act",
"AI",
"Regulation",
"EU",
"GDPR",
"RAI",
"Ethics",
"region:us"
] | 2023-12-10T14:48:04+00:00 | {"language": ["en"], "license": "mit", "tags": ["AI Act", "AI", "Regulation", "EU", "GDPR", "RAI", "Ethics"]} | 2024-01-18T07:53:15+00:00 | [] | [
"en"
] |
fe7397a060a91aa10108310b015330d967f067ce | # ViGEText_17to23 dataset
Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education: https://github.com/uitnlp/ViGEText_17to23
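
A minimal sketch of loading the dataset with the Hugging Face `datasets` library; the split names and the `id`/`input`/`target` fields come from the dataset metadata at the end of this card, and exact behavior may vary with library version.

```python
from datasets import load_dataset

# Load the train/validation/test splits of the Vietnamese MCQA benchmark.
ds = load_dataset("uitnlp/ViGEText_17to23")

# Each example carries an "id", an "input" (the question with its
# A/B/C/D options), and a "target" (the answer character).
example = ds["test"][0]
print(example["id"])
print(example["target"])
```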
```
@inproceedings{10.1145/3628797.3628837,
author = {Nguyen, Duc-Vu and Nguyen, Quoc-Nam},
title = {Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education},
year = {2023},
isbn = {9798400708916},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3628797.3628837},
doi = {10.1145/3628797.3628837},
abstract = {In this paper, we evaluate the ability of large language models (LLMs) to perform multiple choice symbol binding (MCSB) for multiple choice question answering (MCQA) tasks in zero-shot, one-shot, and few-shot settings. We focus on Vietnamese, with fewer challenging MCQA datasets than in English. The two existing datasets, ViMMRC 1.0 and ViMMRC 2.0, focus on literature. Recent research in Vietnamese natural language processing (NLP) has focused on the Vietnamese National High School Graduation Examination (VNHSGE) from 2019 to 2023 to evaluate ChatGPT. However, these studies have mainly focused on how ChatGPT solves the VNHSGE step by step. We aim to create a novel and high-quality dataset by providing structured guidelines for typing LaTeX formulas for mathematics, physics, chemistry, and biology. This dataset can be used to evaluate the MCSB ability of LLMs and smaller language models (LMs) because it is typed in a strict LaTeX style. We determine the most probable character answer (A, B, C, or D) based on context, instead of finding the answer step by step as in previous Vietnamese works. This reduces computational costs and accelerates the evaluation of LLMs. Our evaluation of six well-known LLMs, namely BLOOMZ-7.1B-MT, LLaMA-2-7B, LLaMA-2-70B, GPT-3, GPT-3.5, and GPT-4.0, on the ViMMRC 1.0 and ViMMRC 2.0 benchmarks and our proposed dataset shows promising results on the MCSB ability of LLMs for Vietnamese. The dataset is available1 for research purposes only.},
booktitle = {Proceedings of the 12th International Symposium on Information and Communication Technology},
pages = {379–386},
numpages = {8},
keywords = {Analysis of Language Models, Multiple Choice Symbol Binding, Multiple Choice Question Answering, Language Modeling},
location = {Ho Chi Minh, Vietnam},
series = {SOICT '23}
}
``` | uitnlp/ViGEText_17to23 | [
"region:us"
] | 2023-12-10T15:28:31+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13402, "num_examples": 35}, {"name": "validation", "num_bytes": 291387, "num_examples": 765}, {"name": "test", "num_bytes": 1414977, "num_examples": 3722}], "download_size": 730958, "dataset_size": 1719766}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-10T15:45:56+00:00 | [] | [] |
662c1e927f24a214ca583ff7138d56a35ebdbf56 | # Dataset Summary
This dataset comprises an array of scientific papers; each paper is associated with a series of scores.
These scores quantify the number of citations each paper has received.
The data regarding the papers and their citations were sourced from OpenCitations, a comprehensive and accessible online database of scholarly citations (available at https://opencitations.net/).
# How are these scores calculated?
Imagine a tree where papers are nodes and citations are edges.
For instance, if paper A is cited by both papers B and C, it gets a score of 2 at tree depth 1 (score_depth_1).
If paper D cites B, but C receives no citations, paper A's score at tree depth 2 (score_depth_2) is 1.
These scores (score_depth_1, score_depth_2, ..., score_depth_N) are recorded for each paper, indicating the number of citations at various tree depths, up to depth N.
This method offers a detailed view of a paper's influence through its citation network.
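
To make the definition concrete, here is a small illustrative sketch (not the dataset's release code; the function name and graph representation are ours) that computes the per-depth scores with a level-by-level walk over a citation graph, where `cited_by` maps a paper to the papers citing it:

```python
def citation_scores(paper, cited_by, max_depth):
    """Return {depth: score_depth_N} for depths 1..max_depth.

    `cited_by` maps a paper id to the list of paper ids that cite it.
    """
    scores = {}
    frontier = [paper]
    for depth in range(1, max_depth + 1):
        # All papers citing anything in the current frontier form the
        # next level of the citation tree.
        frontier = [citer for p in frontier for citer in cited_by.get(p, [])]
        scores[depth] = len(frontier)
    return scores

# The example from the text: A is cited by B and C; D cites B; C is uncited.
cited_by = {"A": ["B", "C"], "B": ["D"]}
print(citation_scores("A", cited_by, max_depth=2))  # -> {1: 2, 2: 1}
```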
# Why do I need this information?
Exploring the citation tree of scientific papers can be extremely useful if you decide to fine-tune an LLM and teach it to produce novel scientific concepts that are likely to be winners in the scientific community.
It is expected that papers that produce new and valid scientific concepts are cited by many other papers, and that these subsequent papers themselves also have a large number of "descendants".
In other words, it may be possible to teach an LLM what a projection of a "world model" looks like, since only truthful and consistent ideas will be verified by researchers and cited by future papers.
The deeper the citation tree goes for a certain paper, the higher the likelihood that it contains novel, true, and consistent ideas.
# Dataset Structure
The dataset is obtained from the ArXiv repository.
The dataset comprises rows, each represented by an object with specific attributes.
Here's a breakdown of an example row:
```
{
"omid": "omid:br/061603088748",
"pub_date": "2018-09",
"title": "Development of different methods and their efficiencies for the estimation of diffusion coefficients following the diffusion couple technique",
"doi": "10.1016/j.actamat.2018.04.051",
"link": "https://arxiv.org/abs/1809.09687",
"pdf": "https://arxiv.org/pdf/1809.09687",
"e-print": "https://arxiv.org/e-print/1809.09687",
"score-depth-1": 61,
"score-depth-2": 62,
"score-depth-3": 76,
"score-depth-4": 62,
"score-depth-5": 76
}
```
# Accessing the Dataset
The most recent version of the dataset is named **scores-max=10000-date=15_12_2023.parquet**.
This file is accessible under the "Files and versions" section. The 'max=10000' part in the filename signifies that the highest possible score is 10,000.
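
For instance, the file can be read directly with pandas once downloaded from the "Files and versions" section; the column names below follow the example row shown earlier, so treat the exact spelling as an assumption:

```python
import pandas as pd

# Read the downloaded parquet release.
df = pd.read_parquet("scores-max=10000-date=15_12_2023.parquet")

# Scores are capped at 10,000, per the filename.
print(df[["doi", "score-depth-1", "score-depth-5"]].head())
```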
# Suggested Approach for Fine-Tuning/Reward Model Training
For training a Reinforcement Learning (RL) model, consider the following methodology:
- Select a subset of papers meeting specific criteria. For instance, papers with a depth-5 score greater than 9000 could be considered 'Successful'.
- Tip: select only papers of similar age. Very recent papers may have lower scores simply because there has not been enough time for others to cite them.
- Identify papers with a depth-1 score of 0, labeling these as 'Not Successful'.
- Utilize these two sets to apply RL techniques to a specific Large Language Model (LLM), thereby fine-tuning the model based on the defined success criteria.
This approach helps in categorizing papers based on their perceived impact and using this categorization to refine the learning model's performance.
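
A hedged sketch of that filtering, continuing from the `df` loaded in the snippet above; the 9000 threshold and the date window are illustrative values from the list, not tuned recommendations:

```python
# Keep papers of similar age so recent papers are not unfairly
# labeled as unsuccessful (window chosen purely for illustration).
similar_age = df[df["pub_date"].between("2015-01", "2018-12")]

# 'Successful': depth-5 score above the example threshold of 9000.
successful = similar_age[similar_age["score-depth-5"] > 9000]

# 'Not Successful': never cited at all (depth-1 score of 0).
not_successful = similar_age[similar_age["score-depth-1"] == 0]

print(len(successful), "successful,", len(not_successful), "not successful")
```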
# Citation Information
```
@misc {joão_coelho_2023,
author = { {João Coelho} },
title = { scientific_papers_citation_scores (Revision b7910b4) },
year = 2023,
url = { https://huggingface.co/datasets/JoaoCoelho/scientific_papers_citation_scores },
doi = { 10.57967/hf/1489 },
publisher = { Hugging Face }
}
```
| JoaoCoelho/scientific_papers_citation_scores | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"arxiv:1809.09687",
"doi:10.57967/hf/1489",
"region:us"
] | 2023-12-10T15:36:43+00:00 | {"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]} | 2023-12-15T18:05:04+00:00 | [
"1809.09687"
] | [
"en"
] |
] |
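A minimal sketch of the depth-based scoring and the 'Successful'/'Not Successful' selection described in the card above. The breadth-first walk over the citation tree and the column names (`score_depth_1`, `score_depth_5`) follow the card's wording; the `cited_by` mapping and the pandas filtering are illustrative assumptions, not part of the dataset's tooling:

```python
import pandas as pd

def depth_scores(cited_by, paper, max_depth=5):
    """Count citing papers at each depth of the citation tree rooted at `paper`."""
    scores, frontier = {}, [paper]
    for depth in range(1, max_depth + 1):
        # Expand the frontier one level: all papers citing the current frontier.
        frontier = [c for p in frontier for c in cited_by.get(p, [])]
        scores[depth] = len(frontier)
    return scores

# The card's example: A is cited by B and C; D cites B; C stays uncited.
assert depth_scores({"A": ["B", "C"], "B": ["D"]}, "A") == {1: 2, 2: 1, 3: 0, 4: 0, 5: 0}

# The suggested 'Successful' / 'Not Successful' split (column names assumed):
df = pd.read_parquet("scores-max=10000-date=15_12_2023.parquet")
successful = df[df["score_depth_5"] > 9000]
not_successful = df[df["score_depth_1"] == 0]
```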
9101e90fb635d44d0d2d40e74e4a79df8a97d281 | # Touch Rugby Rules Dataset (for embeddings)
train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf)
test.csv is copy-pasted from the abbreviated rules on the [UK Touch website](https://www.englandtouch.org.uk/develop/coaching/the-rules/). Note that I'm bypassing the PDF-to-text stage.
All text is chunked to a length of 100 tokens with 50% overlap.
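A minimal sketch of this chunking scheme, using a whitespace tokenizer as a stand-in for whatever tokenizer was actually used (real token counts will differ):

```python
def chunk(text: str, size: int = 100, overlap: int = 50) -> list[str]:
    """Split `text` into chunks of `size` tokens, with `overlap` tokens shared
    between consecutive chunks (50% overlap when overlap == size // 2)."""
    tokens = text.split()  # stand-in tokenizer; swap in the real one
    step = size - overlap
    return [" ".join(tokens[i:i + size])
            for i in range(0, max(len(tokens) - overlap, 1), step)
            if tokens[i:i + size]]
```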
For educational and non-commercial use only. | gianlucar/Rugby_test | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"touch rugby",
"region:us"
] | 2023-12-10T16:16:31+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["fine-tuning", "touch rugby"]} | 2023-12-10T16:19:44+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us
| # Touch Rugby Rules Dataset (for embeddings)
URL is taken from the International Touch Website
URL is copy-pasted from the abbreviated rules on the UK Touch website. Note that I'm bypassing the PDF-to-text stage.
All text is chunked to a length of 100 tokens with 50% overlap.
For educational and non-commercial use only. | [
"# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] | [
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n",
"# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] | [
39,
81
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #fine-tuning #touch rugby #region-us \n# Touch Rugby Rules Dataset (for embeddings)\n\nURL is taken from the International Touch Website\n\nURL is copy pasted from abbreviated rules on the UK Touch website. Note that I'm bypassing the pdf to text stage.\n\nAll text is chunked to a length of 100 tokens with 50% overlap.\n\nFor educational and non-commercial use only."
] |
b9bff46aa06d237c8bd1febf549f0742a79e310e |
# Bangumi Image Base of Alice Gear Aegis Expansion
This is the image base of bangumi Alice Gear Aegis Expansion. We detected 40 characters and 2536 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 426 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 40 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 240 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 31 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 21 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 30 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 79 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 17 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 93 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 15 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 13 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 238 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 36 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 51 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 401 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 36 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 37 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 9 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 9 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 28 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 42 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 36 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 30 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 64 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 28 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 10 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 21 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 45 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 18 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 47 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 19 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 9 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 7 | [Download](32/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 33 | 11 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 35 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 58 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 17 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 8 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 5 | [Download](38/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| noise | 176 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
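To work with a single character's subset, the per-character `dataset.zip` archives can be fetched directly; a sketch (the base URL below is an assumption about how the relative links above resolve):

```python
import io
import zipfile

import requests

# Assumed resolve URL for this repository; the table's links are relative to it.
BASE = "https://huggingface.co/datasets/BangumiBase/alicegearaegisexpansion/resolve/main"
char_id = 0
resp = requests.get(f"{BASE}/{char_id}/dataset.zip", timeout=60)
resp.raise_for_status()
zipfile.ZipFile(io.BytesIO(resp.content)).extractall(f"character_{char_id}")
```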
| BangumiBase/alicegearaegisexpansion | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-12-10T17:08:37+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-12-10T18:22:51+00:00 | [] | [] | TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Alice Gear Aegis Expansion
================================================
This is the image base of bangumi Alice Gear Aegis Expansion. We detected 40 characters and 2536 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may still contain noise. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| [] | [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] | [
25
] | [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
c1a67c18df1a52ec332ffc00418f4eaa61a9f2bc |
**Dataset Summary**
* The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
* Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients aged one to five years from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care.
* For the analysis of chest X-ray images, all chest radiographs were initially screened for quality control by removing all low-quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. To account for any grading errors, the evaluation set was also checked by a third expert.
* Summary taken from [Application of the AI System for Pneumonia Detection Using Chest X-Ray Images](https://www.cell.com/cell/fulltext/S0092-8674(18)30154-5?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867418301545%3Fshowall%3Dtrue)
* [Dataset source](https://data.mendeley.com/datasets/rscbjbr9sj/2)
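A minimal sketch for loading the dataset from the Hub and inspecting the two classes (split names follow the folder layout above):

```python
from datasets import load_dataset

ds = load_dataset("hf-vision/chest-xray-pneumonia", split="train")
print(ds.features["label"].names)  # ['NORMAL', 'PNEUMONIA']
example = ds[0]
print(example["label"])            # integer class index
example["image"]                   # PIL image
```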
**Citation**
Citation: Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, V2, doi: 10.17632/rscbjbr9sj.2 | hf-vision/chest-xray-pneumonia | [
"license:cc-by-4.0",
"region:us"
] | 2023-12-10T17:21:01+00:00 | {"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "NORMAL", "1": "PNEUMONIA"}}}}], "splits": [{"name": "train", "num_bytes": 3186635036.504, "num_examples": 5216}, {"name": "validation", "num_bytes": 3030633, "num_examples": 16}, {"name": "test", "num_bytes": 79062317, "num_examples": 624}], "download_size": 1230487171, "dataset_size": 3268727986.504}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-11T13:23:37+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
Dataset Summary
* The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal).
* Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients aged one to five years from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care.
* For the analysis of chest X-ray images, all chest radiographs were initially screened for quality control by removing all low-quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. To account for any grading errors, the evaluation set was also checked by a third expert.
* Summary taken from Application of the AI System for Pneumonia Detection Using Chest X-Ray Images (URL)
* Dataset source
Citation
Citation: Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, V2, doi: 10.17632/rscbjbr9sj.2 | [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] | [
15
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
] |
171513d4ab5e0a7b57e1c11ab967f10266481de9 |
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-7B-v3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Load the details of one evaluated task (here: 5-shot Winogrande);
# the "train" split always points to the latest run's results.
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3",
	"harness_winogrande_5",
	split="train")
```
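To enumerate the other task configurations, the `datasets` inspection helper can be used (a sketch; the exact list depends on the evals that have been run):

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3")
print(len(configs), configs[:3])
```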
## Latest results
These are the [latest results from run 2023-12-11T21:54:18.910589](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3/blob/main/results_2023-12-11T21-54-18.910589.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6546412729180768,
"acc_stderr": 0.03206351393693139,
"acc_norm": 0.6544628324282981,
"acc_norm_stderr": 0.03272956247677216,
"mc1": 0.4467564259485924,
"mc1_stderr": 0.017403977522557144,
"mc2": 0.6041920821344222,
"mc2_stderr": 0.015321517108315672
},
"harness|arc:challenge|25": {
"acc": 0.6612627986348123,
"acc_stderr": 0.01383056892797433,
"acc_norm": 0.6953924914675768,
"acc_norm_stderr": 0.013449522109932489
},
"harness|hellaswag|10": {
"acc": 0.6811392152957578,
"acc_stderr": 0.004650825168905205,
"acc_norm": 0.8681537542322246,
"acc_norm_stderr": 0.0033763209559167064
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.0255428468174005,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.0255428468174005
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.02390491431178265,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.02390491431178265
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.029620227874790482,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.029620227874790482
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8513761467889909,
"acc_stderr": 0.015251253773660834,
"acc_norm": 0.8513761467889909,
"acc_norm_stderr": 0.015251253773660834
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944863,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944863
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608311,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608311
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.023532925431044287,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.023532925431044287
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4122905027932961,
"acc_stderr": 0.016463200238114515,
"acc_norm": 0.4122905027932961,
"acc_norm_stderr": 0.016463200238114515
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.025553169991826524,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.025553169991826524
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188936,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188936
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7623456790123457,
"acc_stderr": 0.023683591837008564,
"acc_norm": 0.7623456790123457,
"acc_norm_stderr": 0.023683591837008564
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46479791395045633,
"acc_stderr": 0.012738547371303957,
"acc_norm": 0.46479791395045633,
"acc_norm_stderr": 0.012738547371303957
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.01879808628488689,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.01879808628488689
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274648,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274648
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578337,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578337
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4467564259485924,
"mc1_stderr": 0.017403977522557144,
"mc2": 0.6041920821344222,
"mc2_stderr": 0.015321517108315672
},
"harness|winogrande|5": {
"acc": 0.8176795580110497,
"acc_stderr": 0.010851565594267198
},
"harness|gsm8k|5": {
"acc": 0.7232752084912812,
"acc_stderr": 0.012323047397959794
}
}
```
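As a small worked example, the per-task accuracies in a results file like the one above can be aggregated directly; a sketch assuming the JSON blob has been saved locally as `results.json`:

```python
import json

with open("results.json") as f:  # the dict shown above
    results = json.load(f)

# Average accuracy over the MMLU (hendrycksTest) subtasks.
mmlu = {k: v["acc"] for k, v in results.items()
        if k.startswith("harness|hendrycksTest")}
print(f"MMLU average over {len(mmlu)} subtasks: {sum(mmlu.values()) / len(mmlu):.4f}")
```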
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3 | [
"region:us"
] | 2023-12-10T17:24:53+00:00 | {"pretty_name": "Evaluation run of AIDC-ai-business/Marcoroni-7B-v3", "dataset_summary": "Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T21:54:18.910589](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3/blob/main/results_2023-12-11T21-54-18.910589.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6546412729180768,\n \"acc_stderr\": 0.03206351393693139,\n \"acc_norm\": 0.6544628324282981,\n \"acc_norm_stderr\": 0.03272956247677216,\n \"mc1\": 0.4467564259485924,\n \"mc1_stderr\": 0.017403977522557144,\n \"mc2\": 0.6041920821344222,\n \"mc2_stderr\": 0.015321517108315672\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6612627986348123,\n \"acc_stderr\": 0.01383056892797433,\n \"acc_norm\": 0.6953924914675768,\n \"acc_norm_stderr\": 0.013449522109932489\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6811392152957578,\n \"acc_stderr\": 0.004650825168905205,\n \"acc_norm\": 0.8681537542322246,\n \"acc_norm_stderr\": 0.0033763209559167064\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695238,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695238\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7283018867924528,\n \"acc_stderr\": 0.027377706624670713,\n \"acc_norm\": 0.7283018867924528,\n \"acc_norm_stderr\": 0.027377706624670713\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.0255428468174005,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.0255428468174005\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.02390491431178265,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.02390491431178265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.029620227874790482,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.029620227874790482\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944863,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944863\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8301404853128991,\n \"acc_stderr\": 0.013428186370608311,\n \"acc_norm\": 0.8301404853128991,\n \"acc_norm_stderr\": 0.013428186370608311\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.023532925431044287,\n \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.023532925431044287\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4122905027932961,\n \"acc_stderr\": 0.016463200238114515,\n \"acc_norm\": 0.4122905027932961,\n \"acc_norm_stderr\": 0.016463200238114515\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.025553169991826524,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.025553169991826524\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n \"acc_stderr\": 0.025670259242188936,\n \"acc_norm\": 0.7138263665594855,\n \"acc_norm_stderr\": 0.025670259242188936\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7623456790123457,\n \"acc_stderr\": 0.023683591837008564,\n \"acc_norm\": 0.7623456790123457,\n \"acc_norm_stderr\": 0.023683591837008564\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n \"acc_stderr\": 0.012738547371303957,\n \"acc_norm\": 0.46479791395045633,\n \"acc_norm_stderr\": 0.012738547371303957\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.01879808628488689,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.01879808628488689\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.025538433368578337,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.025538433368578337\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4467564259485924,\n \"mc1_stderr\": 0.017403977522557144,\n \"mc2\": 0.6041920821344222,\n \"mc2_stderr\": 0.015321517108315672\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8176795580110497,\n \"acc_stderr\": 0.010851565594267198\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7232752084912812,\n \"acc_stderr\": 0.012323047397959794\n 
}\n}\n```", "repo_url": "https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|arc:challenge|25_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|arc:challenge|25_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|arc:challenge|25_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|arc:challenge|25_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|gsm8k|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|gsm8k|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|gsm8k|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|gsm8k|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hellaswag|10_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hellaswag|10_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hellaswag|10_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hellaswag|10_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-21-59.510264.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-21-59.510264.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T17-21-59.510264.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T23-59-25.388550.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T23-59-25.388550.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-01-07.651038.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-01-07.651038.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T00-01-07.651038.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T21-54-18.910589.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T21-54-18.910589.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T21-54-18.910589.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T21-54-18.910589.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": 
["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T21-54-18.910589.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": 
"2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": 
["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["**/details_harness|winogrande|5_2023-12-10T17-21-59.510264.parquet"]}, {"split": "2023_12_10T23_59_25.388550", "path": ["**/details_harness|winogrande|5_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["**/details_harness|winogrande|5_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["**/details_harness|winogrande|5_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T21-54-18.910589.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_10T17_21_59.510264", "path": ["results_2023-12-10T17-21-59.510264.parquet"]}, 
{"split": "2023_12_10T23_59_25.388550", "path": ["results_2023-12-10T23-59-25.388550.parquet"]}, {"split": "2023_12_11T00_01_07.651038", "path": ["results_2023-12-11T00-01-07.651038.parquet"]}, {"split": "2023_12_11T21_54_18.910589", "path": ["results_2023-12-11T21-54-18.910589.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T21-54-18.910589.parquet"]}]}]} | 2023-12-11T21:57:57+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-7B-v3
Dataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-7B-v3 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
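A minimal loading sketch (the repository id below is an assumption, inferred from the leaderboard's `details_<org>__<model>` naming convention used elsewhere in these cards):

```python
from datasets import load_dataset

# repo id inferred from the leaderboard naming convention (assumption)
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-7B-v3",
	"harness_winogrande_5",
	split="train")
```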
## Latest results
These are the latest results from run 2023-12-11T21:54:18.910589 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-7B-v3\n\n\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T21:54:18.910589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-7B-v3\n\n\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T21:54:18.910589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
191,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-7B-v3\n\n\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T21:54:18.910589(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
12f965ab9acd60a75ec66a33d7660ebdc659f55e | # Dataset Card for "dbpedia-entities-splade-10K"
This dataset contains both OpenAI and SPLADE vectors for 10,000 DBpedia entries, adding SPLADE vectors to [KShivendu/dbpedia-entities-openai-1M/](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M/).
Model id used to make these vectors:
```python
model_id = "naver/splade-cocondenser-ensembledistil"
```
The model is available on Hugging Face.
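For reference, here is a minimal sketch of how SPLADE vectors like these are typically produced with this model (standard SPLADE log-saturated max-pooling over the MLM logits; the exact pipeline used to build this dataset may differ):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-ensembledistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

tokens = tokenizer("Example DBpedia abstract", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits  # shape: (1, seq_len, vocab_size)

# SPLADE aggregation: log(1 + ReLU(logits)), masked, then max-pooled over tokens
weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
vec = weights.max(dim=1).values.squeeze(0)  # one weight per vocabulary term
```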
If you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:
```python
import numpy as np
vec = np.array(ds[0]['vec']) # where ds is the dataset
sparse_indices = vec.nonzero()       # positions of the non-zero SPLADE terms
sparse_values = vec[sparse_indices]  # weights at those positions
sparse_indices, sparse_values
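
# Optionally, map the indices back to vocabulary tokens (assumption: the
# vector dimensions correspond to the SPLADE tokenizer's vocabulary, which
# is the usual convention for this model):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("naver/splade-cocondenser-ensembledistil")
token_weights = {
    tokenizer.convert_ids_to_tokens(int(i)): float(w)
    for i, w in zip(sparse_indices[0], sparse_values)
}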
``` | nirantk/dbpedia-entities-splade-ensembledistil-10K | [
"task_categories:question-answering",
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-10T17:38:41+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "feature-extraction"], "pretty_name": "DBPedia SPLADE + OpenAI: 10,000 SPLADE Sparse Vectors + OpenAI Embedding", "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "openai", "sequence": "float32"}, {"name": "embed_text", "dtype": "string"}, {"name": "vec", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 1289801820, "num_examples": 10000}], "download_size": 0, "dataset_size": 1289801820}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-10T17:53:58+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
| # Dataset Card for "dbpedia-entities-splade-10K"
This dataset has both OpenAI and SPLADE vectors for 10,000 DBPedia entries. This adds SPLADE Vectors to KShivendu/dbpedia-entities-openai-1M/
Model id used to make these vectors:
This is available on Huggingface.
If you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:
| [
"# Dataset Card for \"dbpedia-entities-splade-10K\"\n\nThis dataset has both OpenAI and SPLADE vectors for 10,000 DBPedia entries. This adds SPLADE Vectors to KShivendu/dbpedia-entities-openai-1M/\n\nModel id used to make these vectors: \n\n\nThis is available on Huggingface. \n\n\nIf you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:"
] | [
"TAGS\n#task_categories-question-answering #task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for \"dbpedia-entities-splade-10K\"\n\nThis dataset has both OpenAI and SPLADE vectors for 10,000 DBPedia entries. This adds SPLADE Vectors to KShivendu/dbpedia-entities-openai-1M/\n\nModel id used to make these vectors: \n\n\nThis is available on Huggingface. \n\n\nIf you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:"
] | [
54,
112
] | [
"passage: TAGS\n#task_categories-question-answering #task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"dbpedia-entities-splade-10K\"\n\nThis dataset has both OpenAI and SPLADE vectors for 10,000 DBPedia entries. This adds SPLADE Vectors to KShivendu/dbpedia-entities-openai-1M/\n\nModel id used to make these vectors: \n\n\nThis is available on Huggingface. \n\n\nIf you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:"
] |
ae2e2cb88bf8e837adae171f0b4cc4f677578e26 | # RealMedQA
RealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs.
The questions were created by medical students and a large language model (LLM), while the answers are guideline
recommendations provided by the UK's National Institute for Health and Care Excellence (NICE).
## Dataset creation
## Data collection
Initially, 12,543 guidelines were retrieved using the [NICE syndication API](https://www.nice.org.uk/about/what-we-do/nice-syndication-api).
As we were interested only in guidelines that pertain to clinical practice, we used only those
that came under 'Conditions and diseases', which reduced the number to 7,385.
## Question generation
We created an instruction sheet with examples, which we provided to both the humans (medical students)
and the LLM to generate several questions for each guideline recommendation. The instruction sheet
was fed as a prompt along with each recommendation to the LLM, while the humans created the questions using
Google forms.
## Question verification
Both the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality.
The verifiers were asked whether each question:
* Could plausibly be asked by a clinician in practice;
* Was adequately answered by the accompanying recommendation.
A total of 800 human QA pairs and 400 LLM QA pairs were verified.
## Dataset structure
The dataset is structured according to the following columns:
* **Generator:** whether the question was generated by a 'Human' or 'LLM';
* **Question:** the question created by either the human or LLM;
* **Recommendation:** the recommendation that is supposed to address the question;
* **Plausible:** whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);
* **Answered:** whether the question is adequately answered by the recommendation (as assessed by the verifier).
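A minimal loading sketch (the repository id is taken from this card's hosting location; the split name and the filtering pattern are assumptions):

```python
from datasets import load_dataset

# repo id as shown for this card; split name assumed
ds = load_dataset("k2141255/RealMedQA", split="train")
human_qa = ds.filter(lambda row: row["Generator"] == "Human")
```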
## Contact
* Gregory Kell: [email protected]
* Iain Marshall: [email protected] | k2141255/RealMedQA | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"medical",
"region:us"
] | 2023-12-10T17:42:15+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "d", "tags": ["medical"]} | 2023-12-11T11:43:55+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #medical #region-us
| # RealMedQA
RealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs.
The questions were created by medical students and a large language model (LLM), while the answers are guideline
recommendations provided by the UK's National Institute for Health and Care Excellence (NICE).
## Dataset creation
## Data collection
Initially, 12,543 guidelines were retrieved using the NICE syndication API.
As we were interested in only the guidelines that pertain to clinical practice, we only used the guidelines
that came under 'Conditions and diseases' which reduced the number to 7,385.
## Question generation
We created an instruction sheet with examples which we provided to both the humans (medical students)
and the LLM to generate the several questions for each guideline recommendation. The instruction sheet
was fed as a prompt along with each recommendation to the LLM, while the humans created the questions using
Google forms.
## Question verification
Both the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality.
The verifiers were asked whether each question:
* Could plausibly be asked by a clinician would ask in practice;
* Was adequately answered by the accompanying recommendation.
A total of 800 human QA pairs and 400 LLM QA pairs were verified.
## Dataset structure
The dataset is structured according to the following columns:
* Generator: whether the question was generated by a 'Human' or 'LLM';
* Question: the question created by either the human or LLM;
* Recommendation: the recommendation that is supposed to address the question;
* Plausible: whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);
* Answered: whether the question is adequately answered by the clinician (as assessed by the verifier).
## Contact
* Gregory Kell: URL@URL
* Iain Marshall: iain.marshall@URL | [
"# RealMedQA\n\nRealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs.\nThe questions were created by medical students and a large language model (LLM), while the answers are guideline \nrecommendations provided by the UK's National Institute for Health and Care Excellence (NICE).",
"## Dataset creation",
"## Data collection\nInitially, 12,543 guidelines were retrieved using the NICE syndication API.\nAs we were interested in only the guidelines that pertain to clinical practice, we only used the guidelines\nthat came under 'Conditions and diseases' which reduced the number to 7,385.",
"## Question generation\nWe created an instruction sheet with examples which we provided to both the humans (medical students) \nand the LLM to generate the several questions for each guideline recommendation. The instruction sheet \nwas fed as a prompt along with each recommendation to the LLM, while the humans created the questions using \nGoogle forms.",
"## Question verification\nBoth the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality.\nThe verifiers were asked whether each question:\n* Could plausibly be asked by a clinician would ask in practice;\n* Was adequately answered by the accompanying recommendation.\n\nA total of 800 human QA pairs and 400 LLM QA pairs were verified.",
"## Dataset structure\n\nThe dataset is structured according to the following columns:\n* Generator: whether the question was generated by a 'Human' or 'LLM';\n* Question: the question created by either the human or LLM;\n* Recommendation: the recommendation that is supposed to address the question;\n* Plausible: whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);\n* Answered: whether the question is adequately answered by the clinician (as assessed by the verifier).",
"## Contact\n* Gregory Kell: URL@URL\n* Iain Marshall: iain.marshall@URL"
] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #medical #region-us \n",
"# RealMedQA\n\nRealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs.\nThe questions were created by medical students and a large language model (LLM), while the answers are guideline \nrecommendations provided by the UK's National Institute for Health and Care Excellence (NICE).",
"## Dataset creation",
"## Data collection\nInitially, 12,543 guidelines were retrieved using the NICE syndication API.\nAs we were interested in only the guidelines that pertain to clinical practice, we only used the guidelines\nthat came under 'Conditions and diseases' which reduced the number to 7,385.",
"## Question generation\nWe created an instruction sheet with examples which we provided to both the humans (medical students) \nand the LLM to generate the several questions for each guideline recommendation. The instruction sheet \nwas fed as a prompt along with each recommendation to the LLM, while the humans created the questions using \nGoogle forms.",
"## Question verification\nBoth the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality.\nThe verifiers were asked whether each question:\n* Could plausibly be asked by a clinician would ask in practice;\n* Was adequately answered by the accompanying recommendation.\n\nA total of 800 human QA pairs and 400 LLM QA pairs were verified.",
"## Dataset structure\n\nThe dataset is structured according to the following columns:\n* Generator: whether the question was generated by a 'Human' or 'LLM';\n* Question: the question created by either the human or LLM;\n* Recommendation: the recommendation that is supposed to address the question;\n* Plausible: whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);\n* Answered: whether the question is adequately answered by the clinician (as assessed by the verifier).",
"## Contact\n* Gregory Kell: URL@URL\n* Iain Marshall: iain.marshall@URL"
] | [
48,
68,
4,
64,
70,
93,
122,
23
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #medical #region-us \n# RealMedQA\n\nRealMedQA is a biomedical question answering dataset consisting of realistic question and answer pairs.\nThe questions were created by medical students and a large language model (LLM), while the answers are guideline \nrecommendations provided by the UK's National Institute for Health and Care Excellence (NICE).## Dataset creation## Data collection\nInitially, 12,543 guidelines were retrieved using the NICE syndication API.\nAs we were interested in only the guidelines that pertain to clinical practice, we only used the guidelines\nthat came under 'Conditions and diseases' which reduced the number to 7,385.## Question generation\nWe created an instruction sheet with examples which we provided to both the humans (medical students) \nand the LLM to generate the several questions for each guideline recommendation. The instruction sheet \nwas fed as a prompt along with each recommendation to the LLM, while the humans created the questions using \nGoogle forms.## Question verification\nBoth the QA pairs generated by the LLM and those generated by human annotators were verified by humans for quality.\nThe verifiers were asked whether each question:\n* Could plausibly be asked by a clinician would ask in practice;\n* Was adequately answered by the accompanying recommendation.\n\nA total of 800 human QA pairs and 400 LLM QA pairs were verified.## Dataset structure\n\nThe dataset is structured according to the following columns:\n* Generator: whether the question was generated by a 'Human' or 'LLM';\n* Question: the question created by either the human or LLM;\n* Recommendation: the recommendation that is supposed to address the question;\n* Plausible: whether the question could plausibly be asked by a clinician in practice (as assessed by the verifier);\n* Answered: whether the question is adequately answered by the clinician (as assessed by the verifier).## Contact\n* Gregory Kell: URL@URL\n* Iain Marshall: iain.marshall@URL"
] |
e2db7b0a6f35b469dbf868c63653403728499076 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | linqus/github-issues | [
"region:us"
] | 2023-12-10T17:56:24+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "labels", "list": [{"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "assignees", "list": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "milestone", "struct": [{"name": "closed_at", "dtype": "string"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "created_at", "dtype": "string"}, {"name": "creator", "struct": [{"name": "avatar_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "followers_url", 
"dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "login", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "description", "dtype": "string"}, {"name": "due_on", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "labels_url", "dtype": "string"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "open_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "updated_at", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "updated_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "closed_at", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "float64"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "total_count", "dtype": "int64"}, {"name": "url", "dtype": "string"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "float64"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "float64"}, {"name": "pull_request", "struct": [{"name": "diff_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "merged_at", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1717058, "num_examples": 100}], "download_size": 564909, "dataset_size": 1717058}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-10T17:57:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
97ef385adf8e7b7c162b5ff0581a8cfe26331ed0 |
# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-3-Slerp
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp",
"harness_winogrande_5",
split="train")
```
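The same pattern works for any of the 63 configurations; for instance, the aggregated "results" configuration exposes one split per run timestamp plus a "latest" split (per the standard layout of these detail repositories, so treat the split name as an assumption):

```python
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp",
	"results",
	split="latest")
```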
## Latest results
These are the [latest results from run 2023-12-10T17:57:49.451204](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp/blob/main/results_2023-12-10T17-57-49.451204.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6272160356239721,
"acc_stderr": 0.03276418695667091,
"acc_norm": 0.6266234292162511,
"acc_norm_stderr": 0.03344601323704533,
"mc1": 0.47368421052631576,
"mc1_stderr": 0.017479241161975526,
"mc2": 0.6319769000319811,
"mc2_stderr": 0.0150681826970418
},
"harness|arc:challenge|25": {
"acc": 0.6467576791808873,
"acc_stderr": 0.013967822714840055,
"acc_norm": 0.6663822525597269,
"acc_norm_stderr": 0.013778687054176536
},
"harness|hellaswag|10": {
"acc": 0.6664011153156741,
"acc_stderr": 0.0047053471376996185,
"acc_norm": 0.8543118900617407,
"acc_norm_stderr": 0.003520722505332094
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.03878139888797612,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.03878139888797612
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.038009680605548594,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.038009680605548594
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.036430371689585475,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.036430371689585475
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37566137566137564,
"acc_stderr": 0.024942368931159798,
"acc_norm": 0.37566137566137564,
"acc_norm_stderr": 0.024942368931159798
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.02489246917246283,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.02489246917246283
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.030088629490217487,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.030088629490217487
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758733,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.617948717948718,
"acc_stderr": 0.024635549163908234,
"acc_norm": 0.617948717948718,
"acc_norm_stderr": 0.024635549163908234
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.03120469122515002,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.03120469122515002
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.016129271025099867,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.016129271025099867
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.0318114974705536,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.0318114974705536
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077802,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077802
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8020434227330779,
"acc_stderr": 0.014248873549217575,
"acc_norm": 0.8020434227330779,
"acc_norm_stderr": 0.014248873549217575
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.02454761779480383,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.02454761779480383
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4491620111731844,
"acc_stderr": 0.01663583834163192,
"acc_norm": 0.4491620111731844,
"acc_norm_stderr": 0.01663583834163192
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.02641560191438898,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.02641560191438898
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6784565916398714,
"acc_stderr": 0.026527724079528872,
"acc_norm": 0.6784565916398714,
"acc_norm_stderr": 0.026527724079528872
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7006172839506173,
"acc_stderr": 0.025483115601195455,
"acc_norm": 0.7006172839506173,
"acc_norm_stderr": 0.025483115601195455
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.02970045324729146,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.02970045324729146
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4256844850065189,
"acc_stderr": 0.012628393551811947,
"acc_norm": 0.4256844850065189,
"acc_norm_stderr": 0.012628393551811947
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6323529411764706,
"acc_stderr": 0.029289413409403192,
"acc_norm": 0.6323529411764706,
"acc_norm_stderr": 0.029289413409403192
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6290849673202614,
"acc_stderr": 0.019542101564854125,
"acc_norm": 0.6290849673202614,
"acc_norm_stderr": 0.019542101564854125
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421606,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421606
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.47368421052631576,
"mc1_stderr": 0.017479241161975526,
"mc2": 0.6319769000319811,
"mc2_stderr": 0.0150681826970418
},
"harness|winogrande|5": {
"acc": 0.7971586424625099,
"acc_stderr": 0.011301439925936662
},
"harness|gsm8k|5": {
"acc": 0.6997725549658832,
"acc_stderr": 0.01262542315228303
}
}
```
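As a quick illustration, assuming the dictionary above has been parsed into a variable `run`, the per-subject MMLU scores can be ranked like this:

```python
# `run` is assumed to hold the results dictionary shown above
mmlu = {k: v["acc"] for k, v in run.items() if k.startswith("harness|hendrycksTest-")}
best = max(mmlu, key=mmlu.get)  # highest-scoring MMLU subset in this run
print(best, mmlu[best])
```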
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp | [
"region:us"
] | 2023-12-10T18:00:41+00:00 | {"pretty_name": "Evaluation run of Intel/neural-chat-7b-v3-3-Slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-10T17:57:49.451204](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp/blob/main/results_2023-12-10T17-57-49.451204.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6272160356239721,\n \"acc_stderr\": 0.03276418695667091,\n \"acc_norm\": 0.6266234292162511,\n \"acc_norm_stderr\": 0.03344601323704533,\n \"mc1\": 0.47368421052631576,\n \"mc1_stderr\": 0.017479241161975526,\n \"mc2\": 0.6319769000319811,\n \"mc2_stderr\": 0.0150681826970418\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6467576791808873,\n \"acc_stderr\": 0.013967822714840055,\n \"acc_norm\": 0.6663822525597269,\n \"acc_norm_stderr\": 0.013778687054176536\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6664011153156741,\n \"acc_stderr\": 0.0047053471376996185,\n \"acc_norm\": 0.8543118900617407,\n \"acc_norm_stderr\": 0.003520722505332094\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797612,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797612\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n 
\"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.036430371689585475,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.036430371689585475\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37566137566137564,\n \"acc_stderr\": 0.024942368931159798,\n \"acc_norm\": 0.37566137566137564,\n \"acc_norm_stderr\": 0.024942368931159798\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n \"acc_stderr\": 0.02489246917246283,\n \"acc_norm\": 0.7419354838709677,\n \"acc_norm_stderr\": 0.02489246917246283\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n \"acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758733\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.617948717948718,\n \"acc_stderr\": 0.024635549163908234,\n \"acc_norm\": 0.617948717948718,\n \"acc_norm_stderr\": 0.024635549163908234\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.03120469122515002,\n \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.03120469122515002\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\": 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8293577981651377,\n \"acc_stderr\": 0.016129271025099867,\n \"acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.016129271025099867\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n \"acc_stderr\": 0.0318114974705536,\n \"acc_norm\": 0.6591928251121076,\n \"acc_norm_stderr\": 0.0318114974705536\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077802,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077802\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8020434227330779,\n \"acc_stderr\": 0.014248873549217575,\n \"acc_norm\": 0.8020434227330779,\n \"acc_norm_stderr\": 0.014248873549217575\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7052023121387283,\n \"acc_stderr\": 0.02454761779480383,\n \"acc_norm\": 0.7052023121387283,\n \"acc_norm_stderr\": 0.02454761779480383\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4491620111731844,\n \"acc_stderr\": 0.01663583834163192,\n \"acc_norm\": 0.4491620111731844,\n \"acc_norm_stderr\": 0.01663583834163192\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6928104575163399,\n \"acc_stderr\": 0.02641560191438898,\n \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.02641560191438898\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6784565916398714,\n \"acc_stderr\": 0.026527724079528872,\n \"acc_norm\": 0.6784565916398714,\n \"acc_norm_stderr\": 0.026527724079528872\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7006172839506173,\n \"acc_stderr\": 0.025483115601195455,\n \"acc_norm\": 0.7006172839506173,\n \"acc_norm_stderr\": 0.025483115601195455\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.45390070921985815,\n \"acc_stderr\": 0.02970045324729146,\n \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.02970045324729146\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4256844850065189,\n \"acc_stderr\": 0.012628393551811947,\n \"acc_norm\": 0.4256844850065189,\n \"acc_norm_stderr\": 0.012628393551811947\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6290849673202614,\n \"acc_stderr\": 0.019542101564854125,\n \"acc_norm\": 0.6290849673202614,\n \"acc_norm_stderr\": 0.019542101564854125\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421606,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421606\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.47368421052631576,\n \"mc1_stderr\": 0.017479241161975526,\n \"mc2\": 0.6319769000319811,\n \"mc2_stderr\": 0.0150681826970418\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7971586424625099,\n \"acc_stderr\": 0.011301439925936662\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6997725549658832,\n \"acc_stderr\": 0.01262542315228303\n 
}\n}\n```", "repo_url": "https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|arc:challenge|25_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|gsm8k|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hellaswag|10_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-57-49.451204.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-57-49.451204.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-57-49.451204.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T17-57-49.451204.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-57-49.451204.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_10T17_57_49.451204", "path": ["**/details_harness|winogrande|5_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-10T17-57-49.451204.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_10T17_57_49.451204", "path": ["results_2023-12-10T17-57-49.451204.parquet"]}, {"split": "latest", "path": ["results_2023-12-10T17-57-49.451204.parquet"]}]}]} | 2023-12-10T18:01:24+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-3-Slerp
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Intel/neural-chat-7b-v3-3-Slerp on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
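```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3-Slerp",
    "harness_winogrande_5",
    split="train")
```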
## Latest results
These are the latest results from run 2023-12-10T17:57:49.451204 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
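```python
{
    "all": {
        "acc": 0.6272160356239721,
        "acc_stderr": 0.03276418695667091,
        "acc_norm": 0.6266234292162511,
        "acc_norm_stderr": 0.03344601323704533,
        "mc1": 0.47368421052631576,
        "mc1_stderr": 0.017479241161975526,
        "mc2": 0.6319769000319811,
        "mc2_stderr": 0.0150681826970418
    }
    # per-task metrics for this run are stored in the repository alongside these aggregates
}
```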
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
b2173bd318f1773ddd5664477fe2d9ec03581493 |
## Description
I explore the past so you don't have to!
## Prompt
A channel run by an influencer and video blogger called Jess the rich.
She often does weird challenges like "saying yes to everyone", "walking across the United States", or "walking in New York dressed as a chicken" to get millions of views and likes.
She also sometimes gives tips and advice on make-up, beauty, dating, etc., but she now makes random videos
She is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, Baldur's Gate 3, but she now makes random videos | Mikelue/ai-tube-miss-beastg | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-10T18:07:07+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "Jess the rich"} | 2023-12-13T20:33:22+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
66bf10b35754cdac8f2020617b49f019672e0165 |
## Description
I explore the past so you don't have to!
## Prompt
A channel run by an influencer and video blogger called Jess.
She often does weird challenges like "saying yes to everyone", "walking across the United States", or "walking in New York dressed as a chicken" to get millions of views and likes.
She also sometimes gives tips and advice on make-up, beauty, dating, etc., but she now makes random videos
She is also a pro gamer, enjoying games like League of Legends, Fortnite, Call of Duty, The Sims, GTA 5, Baldur's Gate 3, but she now makes random videos | Mikelue/ai-tube-jess | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-10T18:59:26+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "Jess"} | 2024-02-15T17:53:51+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
b83d6dd52b38612f34fa41d8acaab3212173128d | # DBPedia SPLADE + OpenAI: 100,000 SPLADE Sparse Vectors + OpenAI Embedding
This dataset provides both OpenAI embeddings and SPLADE sparse vectors for 100,000 DBPedia entries. It adds SPLADE vectors to [KShivendu/dbpedia-entities-openai-1M/](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M)
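To load the data (the repository id, the `train` split, and the column names `_id`, `title`, `text`, `openai`, and `splade` are taken from this dataset's own configuration):

```python
from datasets import load_dataset

ds = load_dataset("nirantk/dbpedia-entities-efficient-splade-100K", split="train")
row = ds[0]
print(row["title"], len(row["openai"]), len(row["splade"]))
```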
Model id used to make these vectors:
```python
model_id = "naver/efficient-splade-VI-BT-large-doc"
```
For processing the query, use this:
```python
model_id = "naver/efficient-splade-VI-BT-large-query"
```
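For reference, a minimal sketch of how a SPLADE vector can be computed with these models through the standard `transformers` masked-LM interface. This encoding recipe is an assumption for illustration; the exact script used to build this dataset isn't specified here:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/efficient-splade-VI-BT-large-doc"  # swap in the -query model for queries
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def encode_splade(text: str) -> torch.Tensor:
    tokens = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**tokens).logits  # (1, seq_len, vocab_size)
    # SPLADE activation: log-saturated ReLU over the MLM logits,
    # max-pooled across the sequence, with padding positions masked out
    weights = torch.log1p(torch.relu(logits)) * tokens["attention_mask"].unsqueeze(-1)
    return weights.max(dim=1).values.squeeze(0)  # one weight per vocabulary entry
```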
If you'd like to extract the indices and weights/values from the vectors, you can do so using the following snippet:
```python
import numpy as np

vec = np.array(ds[0]["splade"])  # where ds is the loaded dataset; per this dataset's schema the sparse column is named "splade"

def get_indices_values(vec):
    # Indices of the non-zero entries (the active vocabulary dimensions)
    sparse_indices = vec.nonzero()
    # Weights at those indices
    sparse_values = vec[sparse_indices]
    return sparse_indices, sparse_values
``` | nirantk/dbpedia-entities-efficient-splade-100K | [
"task_categories:feature-extraction",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-10T19:22:47+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["feature-extraction"], "pretty_name": "DBPedia SPLADE + OpenAI: 100,000 Vectors", "dataset_info": {"features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "openai", "sequence": "float32"}, {"name": "splade", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 12862697823, "num_examples": 100000}], "download_size": 901410913, "dataset_size": 12862697823}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T03:19:32+00:00 | [] | [
"en"
] | TAGS
#task_categories-feature-extraction #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
a7639b66646e7c472d899b6fedaa69c892d72094 | # Dataset Card for "fw_frame_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anyspeech/fw_frame_labels | [
"region:us"
] | 2023-12-10T19:36:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "ana1239", "path": "data/ana1239-*"}, {"split": "arap1274", "path": "data/arap1274-*"}, {"split": "bain1259", "path": "data/bain1259-*"}, {"split": "beja1238", "path": "data/beja1238-*"}, {"split": "bora1263", "path": "data/bora1263-*"}, {"split": "cabe1245", "path": "data/cabe1245-*"}, {"split": "daaki_port1286", "path": "data/daaki_port1286-*"}, {"split": "dolg1241", "path": "data/dolg1241-*"}, {"split": "even1259", "path": "data/even1259-*"}, {"split": "goro1270", "path": "data/goro1270-*"}, {"split": "jeha1242", "path": "data/jeha1242-*"}, {"split": "jeju1234", "path": "data/jeju1234-*"}, {"split": "kaka1265", "path": "data/kaka1265-*"}, {"split": "kama1351", "path": "data/kama1351-*"}, {"split": "komn1238", "path": "data/komn1238-*"}, {"split": "mojeno_trinitario_trin178", "path": "data/mojeno_trinitario_trin178-*"}, {"split": "ngal1292", "path": "data/ngal1292-*"}, {"split": "nngg1234", "path": "data/nngg1234-*"}, {"split": "nort2641", "path": "data/nort2641-*"}, {"split": "nort2875", "path": "data/nort2875-*"}, {"split": "north_alta_nort2875", "path": "data/north_alta_nort2875-*"}, {"split": "orko1234", "path": "data/orko1234-*"}, {"split": "pnar1238", "path": "data/pnar1238-*"}, {"split": "resi1247", "path": "data/resi1247-*"}, {"split": "ruul1235", "path": "data/ruul1235-*"}, {"split": "sadu1234", "path": "data/sadu1234-*"}, {"split": "sanz1248", "path": "data/sanz1248-*"}, {"split": "sout2856", "path": "data/sout2856-*"}, {"split": "stan1290", "path": "data/stan1290-*"}, {"split": "sumi1235", "path": "data/sumi1235-*"}, {"split": "svan1243", "path": "data/svan1243-*"}, {"split": "taba1259", "path": "data/taba1259-*"}, {"split": "texi1237", "path": "data/texi1237-*"}, {"split": "tsim1256", "path": "data/tsim1256-*"}, {"split": "urum1249", "path": "data/urum1249-*"}, {"split": "vera1241", "path": "data/vera1241-*"}, {"split": "yong1270", "path": "data/yong1270-*"}, {"split": "yuca1254", "path": "data/yuca1254-*"}]}], "dataset_info": {"features": [{"name": "words", "sequence": "string"}, {"name": "phones", "sequence": "string"}, {"name": "filename", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "word_details", "struct": [{"name": "start", "sequence": "float64"}, {"name": "stop", "sequence": "float64"}, {"name": "word", "sequence": "string"}]}, {"name": "phone_details", "struct": [{"name": "start", "sequence": "float64"}, {"name": "stop", "sequence": "float64"}, {"name": "word", "sequence": "string"}]}], "splits": [{"name": "ana1239", "num_bytes": 43237074.0, "num_examples": 442}, {"name": "arap1274", "num_bytes": 101016770.0, "num_examples": 783}, {"name": "bain1259", "num_bytes": 46923978.0, "num_examples": 463}, {"name": "beja1238", "num_bytes": 42135983.0, "num_examples": 559}, {"name": "bora1263", "num_bytes": 40033907.0, "num_examples": 281}, {"name": "cabe1245", "num_bytes": 43877112.0, "num_examples": 375}, {"name": "daaki_port1286", "num_bytes": 26033382.0, "num_examples": 250}, {"name": "dolg1241", "num_bytes": 80130814.0, "num_examples": 584}, {"name": "even1259", "num_bytes": 77156608.0, "num_examples": 614}, {"name": "goro1270", "num_bytes": 32997371.0, "num_examples": 345}, {"name": "jeha1242", "num_bytes": 52767349.0, "num_examples": 451}, {"name": "jeju1234", "num_bytes": 2940066.0, "num_examples": 35}, {"name": "kaka1265", "num_bytes": 68707138.0, "num_examples": 513}, {"name": "kama1351", "num_bytes": 
97294457.0, "num_examples": 837}, {"name": "komn1238", "num_bytes": 31721772.0, "num_examples": 377}, {"name": "mojeno_trinitario_trin178", "num_bytes": 75376139.0, "num_examples": 412}, {"name": "ngal1292", "num_bytes": 9545737.0, "num_examples": 120}, {"name": "nngg1234", "num_bytes": 14354037.0, "num_examples": 166}, {"name": "nort2641", "num_bytes": 62950976.0, "num_examples": 442}, {"name": "nort2875", "num_bytes": 60560835.0, "num_examples": 672}, {"name": "north_alta_nort2875", "num_bytes": 60561518.0, "num_examples": 672}, {"name": "orko1234", "num_bytes": 25102377.0, "num_examples": 276}, {"name": "pnar1238", "num_bytes": 34278534.0, "num_examples": 128}, {"name": "resi1247", "num_bytes": 147753898.0, "num_examples": 840}, {"name": "ruul1235", "num_bytes": 37756689.0, "num_examples": 372}, {"name": "sadu1234", "num_bytes": 17651755.0, "num_examples": 198}, {"name": "sanz1248", "num_bytes": 20234299.0, "num_examples": 129}, {"name": "sout2856", "num_bytes": 41953458.0, "num_examples": 213}, {"name": "stan1290", "num_bytes": 36776000.0, "num_examples": 411}, {"name": "sumi1235", "num_bytes": 16617938.0, "num_examples": 187}, {"name": "svan1243", "num_bytes": 65253777.0, "num_examples": 423}, {"name": "taba1259", "num_bytes": 24384756.0, "num_examples": 181}, {"name": "texi1237", "num_bytes": 9125979.0, "num_examples": 106}, {"name": "tsim1256", "num_bytes": 32795528.0, "num_examples": 361}, {"name": "urum1249", "num_bytes": 43314638.0, "num_examples": 289}, {"name": "vera1241", "num_bytes": 67008617.0, "num_examples": 582}, {"name": "yong1270", "num_bytes": 35187002.0, "num_examples": 257}, {"name": "yuca1254", "num_bytes": 40101118.0, "num_examples": 360}], "download_size": 1751886399, "dataset_size": 1765619386.0}} | 2023-12-10T19:37:57+00:00 | [] | [] | TAGS
#region-us
5a0fd3473a5ad229d9fb75e0de83d795265609c4 | # 📜 Modern to Shakespearean-Style Phrases Dataset
Welcome to the Modern to Shakespearean-Style Phrases Dataset! This collection features contemporary phrases alongside their whimsical translations into a Shakespearean style. 🎭 It was ingeniously crafted by GPT-4 and is available for a variety of purposes, in accordance with the terms of service provided by OpenAI.
## 📖 Dataset Description
Behold, within this dataset, you shall find an array of modern phrases commonly spoken in our time, alongside their illustrious transformations into a style evoking Middle English, Old English, and the Shakespearean era. 🗣️ Each entry presents an "input" phrase in modern vernacular, juxtaposed with an "output" phrase translated into a classical literary form. 📜 The dataset is designed to offer a playful linguistic metamorphosis of contemporary expressions into a traditional literary context.
## 🚀 Usage
This wondrous dataset of Modern to Shakespearean-Style Phrases can be harnessed for a myriad of natural language processing (NLP) quests, including but not limited to:
- 🌍 Language translation and style transmutation
- 📝 Creative writing and text generation
- 🧐 Profound linguistic and cultural analysis
- 📚 Enlightening educational endeavors
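As a starting point, here is a minimal loading sketch (not part of the original card; the column names `modern` and `shakespearean` come from this card's dataset_info):
```python
# Hedged sketch: load the dataset and print one modern/Shakespearean pair.
from datasets import load_dataset

ds = load_dataset("harpreetsahota/modern-to-shakesperean-translation", split="train")
example = ds[0]
print(example["modern"], "->", example["shakespearean"])
```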
## 📚 Sources of Definitions
The definitions of these contemporary phrases were gleaned from esteemed sources dedicated to the exploration of modern slang:
- [Parade - Gen Z Slang Words](https://parade.com/1293898/marynliles/gen-z-slang-words/) 📢
- [Pangea Global - Generational Slang Words and Phrases (X, Y, and Z)](https://www.pangea.global/blog/a-list-of-generational-slang-words-and-phrases-x-y-and-z/) 🌏
- [FamilyEducation - A Guide to New Teen Slang and Gen Z Slang Trends of 2023](https://www.familyeducation.com/teens/social-development/communication/a-guide-to-new-teen-slang-and-gen-z-slang-trends-of-2023) 📆
- [Sassy Mama HK - Parenting Slang Terms for Tweens and Teens](https://www.sassymamahk.com/parenting-slang-terms-tweens-teens/) 👶
Feel free to explore these sources for further insights into the origins and contexts of these phrases.
## 📝 License
This dataset is offered under the terms of service provided by OpenAI. For detailed information on usage and licensing, please refer to OpenAI's terms of service.
| harpreetsahota/modern-to-shakesperean-translation | [
"region:us"
] | 2023-12-10T19:42:51+00:00 | {"dataset_info": {"features": [{"name": "modern", "dtype": "string"}, {"name": "shakespearean", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29477, "num_examples": 274}], "download_size": 22809, "dataset_size": 29477}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-10T21:09:14+00:00 | [] | [] | TAGS
#region-us
84b49e225a484b7552ace3df29eaba6466badca4 |
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-deepseek-67b-v15.1](https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1",
"harness_winogrande_5",
split="train")
```
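To see which of the 63 task configurations are available before loading one, you can list them (a minimal sketch using the standard `datasets` inspection helper; not part of the auto-generated card):
```python
# Hedged sketch: enumerate the available configurations of this details repo.
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1"
)
print(len(configs), configs[:5])
```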
## Latest results
These are the [latest results from run 2023-12-10T20:13:41.089487](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1/blob/main/results_2023-12-10T20-13-41.089487.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in its results file and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7036058129176036,
"acc_stderr": 0.03028453159020021,
"acc_norm": 0.705307528908225,
"acc_norm_stderr": 0.030895027239583782,
"mc1": 0.39167686658506734,
"mc1_stderr": 0.017087795881769625,
"mc2": 0.5441532764532347,
"mc2_stderr": 0.015072690852418868
},
"harness|arc:challenge|25": {
"acc": 0.6527303754266212,
"acc_stderr": 0.013913034529620451,
"acc_norm": 0.6766211604095563,
"acc_norm_stderr": 0.013669421630012127
},
"harness|hellaswag|10": {
"acc": 0.6784505078669588,
"acc_stderr": 0.004661165425661981,
"acc_norm": 0.8648675562636925,
"acc_norm_stderr": 0.0034116630716511135
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7509433962264151,
"acc_stderr": 0.02661648298050171,
"acc_norm": 0.7509433962264151,
"acc_norm_stderr": 0.02661648298050171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795717,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795717
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6595744680851063,
"acc_stderr": 0.03097669299853443,
"acc_norm": 0.6595744680851063,
"acc_norm_stderr": 0.03097669299853443
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5350877192982456,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.5350877192982456,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6620689655172414,
"acc_stderr": 0.039417076320648906,
"acc_norm": 0.6620689655172414,
"acc_norm_stderr": 0.039417076320648906
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.021732540689329286,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.021732540689329286
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5911330049261084,
"acc_stderr": 0.03459058815883233,
"acc_norm": 0.5911330049261084,
"acc_norm_stderr": 0.03459058815883233
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.031234752377721175,
"acc_norm": 0.8,
"acc_norm_stderr": 0.031234752377721175
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8838383838383839,
"acc_stderr": 0.022828881775249377,
"acc_norm": 0.8838383838383839,
"acc_norm_stderr": 0.022828881775249377
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9430051813471503,
"acc_stderr": 0.016731085293607558,
"acc_norm": 0.9430051813471503,
"acc_norm_stderr": 0.016731085293607558
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.717948717948718,
"acc_stderr": 0.022815813098896607,
"acc_norm": 0.717948717948718,
"acc_norm_stderr": 0.022815813098896607
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8025210084033614,
"acc_stderr": 0.025859164122051456,
"acc_norm": 0.8025210084033614,
"acc_norm_stderr": 0.025859164122051456
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.44370860927152317,
"acc_stderr": 0.04056527902281732,
"acc_norm": 0.44370860927152317,
"acc_norm_stderr": 0.04056527902281732
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.012809780081878929,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.012809780081878929
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.03324708911809117,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.03324708911809117
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.01886951464665893,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.01886951464665893
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9029535864978903,
"acc_stderr": 0.01926932302564026,
"acc_norm": 0.9029535864978903,
"acc_norm_stderr": 0.01926932302564026
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.02737309550054019,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.02737309550054019
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615771,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.9029126213592233,
"acc_stderr": 0.02931596291881347,
"acc_norm": 0.9029126213592233,
"acc_norm_stderr": 0.02931596291881347
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9102564102564102,
"acc_stderr": 0.01872430174194166,
"acc_norm": 0.9102564102564102,
"acc_norm_stderr": 0.01872430174194166
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8914431673052363,
"acc_stderr": 0.011124283175851183,
"acc_norm": 0.8914431673052363,
"acc_norm_stderr": 0.011124283175851183
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7658959537572254,
"acc_stderr": 0.022797110278071128,
"acc_norm": 0.7658959537572254,
"acc_norm_stderr": 0.022797110278071128
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45139664804469276,
"acc_stderr": 0.016643307372315872,
"acc_norm": 0.45139664804469276,
"acc_norm_stderr": 0.016643307372315872
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7909967845659164,
"acc_stderr": 0.02309314039837422,
"acc_norm": 0.7909967845659164,
"acc_norm_stderr": 0.02309314039837422
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.020736358408060006,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.020736358408060006
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5391134289439374,
"acc_stderr": 0.012731102790504519,
"acc_norm": 0.5391134289439374,
"acc_norm_stderr": 0.012731102790504519
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7279411764705882,
"acc_stderr": 0.027033041151681456,
"acc_norm": 0.7279411764705882,
"acc_norm_stderr": 0.027033041151681456
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856926,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856926
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7795918367346939,
"acc_stderr": 0.02653704531214529,
"acc_norm": 0.7795918367346939,
"acc_norm_stderr": 0.02653704531214529
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.023729830881018526,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.023729830881018526
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.038444531817709175,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.038444531817709175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.39167686658506734,
"mc1_stderr": 0.017087795881769625,
"mc2": 0.5441532764532347,
"mc2_stderr": 0.015072690852418868
},
"harness|winogrande|5": {
"acc": 0.8476716653512234,
"acc_stderr": 0.010099208246065614
},
"harness|gsm8k|5": {
"acc": 0.6694465504169825,
"acc_stderr": 0.012957496367085024
}
}
```
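If you prefer working with the raw file, the results JSON linked above can also be fetched directly (a minimal sketch; `/resolve/main/` is the Hub's standard raw-download path, and the top-level key layout of the file is not shown above, so inspect it first):
```python
# Hedged sketch: download and parse the results file linked above.
import json
import urllib.request

url = ("https://huggingface.co/datasets/open-llm-leaderboard/"
       "details_OpenBuddy__openbuddy-deepseek-67b-v15.1/resolve/main/"
       "results_2023-12-10T20-13-41.089487.json")
with urllib.request.urlopen(url) as f:
    results = json.load(f)
print(list(results.keys()))  # locate the "all" block shown above
```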
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1 | [
"region:us"
] | 2023-12-10T20:16:26+00:00 | {"pretty_name": "Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-deepseek-67b-v15.1](https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-10T20:13:41.089487](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1/blob/main/results_2023-12-10T20-13-41.089487.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7036058129176036,\n \"acc_stderr\": 0.03028453159020021,\n \"acc_norm\": 0.705307528908225,\n \"acc_norm_stderr\": 0.030895027239583782,\n \"mc1\": 0.39167686658506734,\n \"mc1_stderr\": 0.017087795881769625,\n \"mc2\": 0.5441532764532347,\n \"mc2_stderr\": 0.015072690852418868\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6527303754266212,\n \"acc_stderr\": 0.013913034529620451,\n \"acc_norm\": 0.6766211604095563,\n \"acc_norm_stderr\": 0.013669421630012127\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6784505078669588,\n \"acc_stderr\": 0.004661165425661981,\n \"acc_norm\": 0.8648675562636925,\n \"acc_norm_stderr\": 0.0034116630716511135\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.03317672787533157,\n \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.03317672787533157\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7509433962264151,\n \"acc_stderr\": 0.02661648298050171,\n \"acc_norm\": 0.7509433962264151,\n \"acc_norm_stderr\": 0.02661648298050171\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n \"acc_stderr\": 0.03167473383795717,\n \"acc_norm\": 0.8263888888888888,\n \"acc_norm_stderr\": 0.03167473383795717\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6595744680851063,\n \"acc_stderr\": 0.03097669299853443,\n \"acc_norm\": 0.6595744680851063,\n \"acc_norm_stderr\": 0.03097669299853443\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5350877192982456,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.5350877192982456,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6620689655172414,\n \"acc_stderr\": 0.039417076320648906,\n \"acc_norm\": 0.6620689655172414,\n \"acc_norm_stderr\": 0.039417076320648906\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.021732540689329286,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.021732540689329286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5911330049261084,\n \"acc_stderr\": 0.03459058815883233,\n \"acc_norm\": 0.5911330049261084,\n \"acc_norm_stderr\": 0.03459058815883233\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.031234752377721175,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.031234752377721175\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8838383838383839,\n \"acc_stderr\": 0.022828881775249377,\n \"acc_norm\": 0.8838383838383839,\n \"acc_norm_stderr\": 0.022828881775249377\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.016731085293607558,\n \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.016731085293607558\n 
},\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.717948717948718,\n \"acc_stderr\": 0.022815813098896607,\n \"acc_norm\": 0.717948717948718,\n \"acc_norm_stderr\": 0.022815813098896607\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8025210084033614,\n \"acc_stderr\": 0.025859164122051456,\n \"acc_norm\": 0.8025210084033614,\n \"acc_norm_stderr\": 0.025859164122051456\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281732,\n \"acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281732\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9009174311926605,\n \"acc_stderr\": 0.012809780081878929,\n \"acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.012809780081878929\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.03324708911809117,\n \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.03324708911809117\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9215686274509803,\n \"acc_stderr\": 0.01886951464665893,\n \"acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.01886951464665893\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9029535864978903,\n \"acc_stderr\": 0.01926932302564026,\n \"acc_norm\": 0.9029535864978903,\n \"acc_norm_stderr\": 0.01926932302564026\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n \"acc_stderr\": 0.02737309550054019,\n \"acc_norm\": 0.7892376681614349,\n \"acc_norm_stderr\": 0.02737309550054019\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615771,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.9029126213592233,\n \"acc_stderr\": 0.02931596291881347,\n \"acc_norm\": 0.9029126213592233,\n \"acc_norm_stderr\": 0.02931596291881347\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9102564102564102,\n \"acc_stderr\": 0.01872430174194166,\n \"acc_norm\": 0.9102564102564102,\n \"acc_norm_stderr\": 0.01872430174194166\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8914431673052363,\n \"acc_stderr\": 0.011124283175851183,\n \"acc_norm\": 0.8914431673052363,\n \"acc_norm_stderr\": 0.011124283175851183\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7658959537572254,\n \"acc_stderr\": 0.022797110278071128,\n \"acc_norm\": 0.7658959537572254,\n \"acc_norm_stderr\": 0.022797110278071128\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.45139664804469276,\n \"acc_stderr\": 0.016643307372315872,\n \"acc_norm\": 0.45139664804469276,\n \"acc_norm_stderr\": 0.016643307372315872\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7909967845659164,\n \"acc_stderr\": 0.02309314039837422,\n \"acc_norm\": 0.7909967845659164,\n \"acc_norm_stderr\": 0.02309314039837422\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060006,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060006\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5391134289439374,\n \"acc_stderr\": 0.012731102790504519,\n \"acc_norm\": 0.5391134289439374,\n \"acc_norm_stderr\": 0.012731102790504519\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7279411764705882,\n \"acc_stderr\": 0.027033041151681456,\n \"acc_norm\": 0.7279411764705882,\n \"acc_norm_stderr\": 0.027033041151681456\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856926,\n \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856926\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7795918367346939,\n \"acc_stderr\": 0.02653704531214529,\n \"acc_norm\": 0.7795918367346939,\n \"acc_norm_stderr\": 0.02653704531214529\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n \"acc_stderr\": 0.023729830881018526,\n \"acc_norm\": 0.8706467661691543,\n \"acc_norm_stderr\": 0.023729830881018526\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n \"acc_stderr\": 0.038444531817709175,\n \"acc_norm\": 0.5783132530120482,\n \"acc_norm_stderr\": 0.038444531817709175\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.39167686658506734,\n \"mc1_stderr\": 0.017087795881769625,\n \"mc2\": 0.5441532764532347,\n \"mc2_stderr\": 0.015072690852418868\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8476716653512234,\n \"acc_stderr\": 0.010099208246065614\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6694465504169825,\n \"acc_stderr\": 
0.012957496367085024\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|arc:challenge|25_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|gsm8k|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hellaswag|10_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_10T20_13_41.089487", "path": ["**/details_harness|winogrande|5_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-10T20-13-41.089487.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_10T20_13_41.089487", "path": ["results_2023-12-10T20-13-41.089487.parquet"]}, {"split": "latest", "path": ["results_2023-12-10T20-13-41.089487.parquet"]}]}]} | 2023-12-10T20:17:09+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
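A minimal sketch is shown below; the repository name is assumed from the leaderboard's usual `details_<org>__<model>` naming convention, and the `harness_winogrande_5` configuration is taken from the configuration list above:

```python
from datasets import load_dataset

# Repository name assumed from the Open LLM Leaderboard naming convention
# ("details_" + organization + "__" + model); adjust if it differs.
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1",
                    "harness_winogrande_5",
                    split="train")
```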
## Latest results
These are the latest results from run 2023-12-10T20:13:41.089487 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
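The aggregated numbers referenced here live in the "results" configuration, which exposes a "latest" split (see the configuration list in the metadata above). A sketch for loading just those, under the same assumed repository name as in the previous snippet:

```python
from datasets import load_dataset

# Aggregated metrics for the most recent run; repository name assumed
# as in the snippet above.
results = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1",
                       "results",
                       split="latest")
```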
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T20:13:41.089487(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T20:13:41.089487(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-10T20:13:41.089487(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
288c1e9b7445e5f1c84b00022bf7c92500c17593 | # Dataset Card for "E_normal_over70_add"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | syp1229/E_normal_over70_add | [
"region:us"
] | 2023-12-10T20:36:14+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "string"}, {"name": "sample_rate", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "scriptId", "dtype": "int64"}, {"name": "fileNm", "dtype": "string"}, {"name": "recrdTime", "dtype": "float64"}, {"name": "recrdQuality", "dtype": "string"}, {"name": "recrdDt", "dtype": "string"}, {"name": "scriptSetNo", "dtype": "string"}, {"name": "recrdEnvrn", "dtype": "string"}, {"name": "colctUnitCode", "dtype": "string"}, {"name": "cityCode", "dtype": "string"}, {"name": "recrdUnit", "dtype": "string"}, {"name": "convrsThema", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "recorderId", "dtype": "string"}, {"name": "age", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8519858998, "num_examples": 7001}, {"name": "test", "num_bytes": 5515199, "num_examples": 5}], "download_size": 1890502951, "dataset_size": 8525374197}} | 2023-12-10T20:40:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "E_normal_over70_add"
More Information needed | [
"# Dataset Card for \"E_normal_over70_add\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"E_normal_over70_add\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"E_normal_over70_add\"\n\nMore Information needed"
] |
0a345200f47d78f578f1c1102c7e9631c8e9d87c | # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-mistral-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-b-v2-test-paraphrase-mistral-7b | [
"region:us"
] | 2023-12-10T21:06:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "paraphrase", "dtype": "string"}, {"name": "mistral-7b_estimated_loss", "dtype": "float64"}, {"name": "mistral-7b_mean_lowest25", "dtype": "float64"}, {"name": "mistral-7b_mean_highest25", "dtype": "float64"}, {"name": "mistral-7b_max", "dtype": "float64"}, {"name": "mistral-7b_min", "dtype": "float64"}, {"name": "mistral-7b_range", "dtype": "float64"}, {"name": "mistral-7b_mean", "dtype": "float64"}, {"name": "mistral-7b_std", "dtype": "float64"}, {"name": "mistral-7b_entropy", "dtype": "float64"}, {"name": "mistral-7b_kurtosis", "dtype": "float64"}, {"name": "mistral-7b_skewness", "dtype": "float64"}, {"name": "mistral-7b_perplexity", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 8494381, "num_examples": 3000}], "download_size": 4042653, "dataset_size": 8494381}} | 2023-12-10T21:06:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-mistral-7b"
More Information needed | [
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] |
0c7d1df41c8148fbe7640de3cd6af140039ffc47 | # Dataset Card for "semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b | [
"region:us"
] | 2023-12-10T21:47:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "paraphrase", "dtype": "string"}, {"name": "mistral-7b_estimated_loss", "dtype": "float64"}, {"name": "mistral-7b_mean_lowest25", "dtype": "float64"}, {"name": "mistral-7b_mean_highest25", "dtype": "float64"}, {"name": "mistral-7b_max", "dtype": "float64"}, {"name": "mistral-7b_min", "dtype": "float64"}, {"name": "mistral-7b_range", "dtype": "float64"}, {"name": "mistral-7b_mean", "dtype": "float64"}, {"name": "mistral-7b_std", "dtype": "float64"}, {"name": "mistral-7b_entropy", "dtype": "float64"}, {"name": "mistral-7b_kurtosis", "dtype": "float64"}, {"name": "mistral-7b_skewness", "dtype": "float64"}, {"name": "mistral-7b_perplexity", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 18057049, "num_examples": 5000}], "download_size": 10618693, "dataset_size": 18057049}} | 2023-12-10T21:47:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b"
More Information needed | [
"# Dataset Card for \"semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] | [
6,
34
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-a-mono-v2-test-paraphrase-mistral-7b\"\n\nMore Information needed"
] |
37d28b94fe9fa5014ca1b23a2dda7537be8455c5 | <div align="center">
<h2>AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis</h2>
_**[Susan Liang](https://liangsusan-git.github.io/), [Chao Huang](https://wikichao.github.io/), [Yapeng Tian](https://www.yapengtian.com/), [Anurag Kumar](https://anuragkr90.github.io/), [Chenliang Xu](https://www.cs.rochester.edu/~cxu22/)**_
</div>
### RWAVS Dataset
We provide the Real-World Audio-Visual Scene (RWAVS) Dataset.
1. The dataset can be downloaded from this Hugging Face repository.
2. After you download the dataset, you can decompress the `RWAVS_Release.zip`.
```
unzip RWAVS_Release.zip
cd release/
```
3. The data is organized with the following directory structure.
```
./release/
├── 1
│ ├── binaural_syn_re.wav
│ ├── feats_train.pkl
│ ├── feats_val.pkl
│ ├── frames
│ │ ├── 00001.png
│   │   ├── ...
│ │ ├── 00616.png
│ ├── source_syn_re.wav
│ ├── transforms_scale_train.json
│ ├── transforms_scale_val.json
│ ├── transforms_train.json
│ └── transforms_val.json
├── ...
├── 13
└── position.json
```
The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide
* `transforms_train.json`: camera poses for training.
* `transforms_val.json`: camera poses for evaluation. We split the data into `train` and `val` subsets, with 80% of the data for training and the rest for evaluation.
* `transforms_scale_train.json`: normalized camera poses for training. We scale 3D coordinates to $[-1, 1]^3$.
* `transforms_scale_val.json`: normalized camera poses for evaluation.
* `frames`: corresponding video frames for each camera pose.
* `source_syn_re.wav`: single-channel audio emitted by the sound source.
* `binaural_syn_re.wav`: two-channel audio captured by the binaural microphone. We synchronize `source_syn_re.wav` and `binaural_syn_re.wav` and resample them to $22050$ Hz.
* `feats_train.pkl`: extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose. We then use a pre-trained encoder to extract features from rendered images.
* `feats_val.pkl`: extracted vision and depth features at each camera pose for inference.
* `position.json`: normalized 3D coordinates of the sound source.
Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate the camera parameters of these frames.
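To make the layout above concrete, here is a minimal loading sketch for one scene. It assumes the standard `json` module and the `soundfile` package for audio I/O (any WAV reader works), and uses scene `1` purely as an example:

```python
import json
import soundfile as sf  # assumed audio I/O choice; librosa or scipy.io.wavfile also work

scene = "release/1"

# Normalized camera poses for training (coordinates scaled to [-1, 1]^3)
with open(f"{scene}/transforms_scale_train.json") as f:
    poses = json.load(f)

# Mono source audio and two-channel binaural recording, both at 22050 Hz
source, sr_src = sf.read(f"{scene}/source_syn_re.wav")
binaural, sr_bin = sf.read(f"{scene}/binaural_syn_re.wav")
assert sr_src == sr_bin == 22050

# Normalized 3D coordinates of the sound source for every scene
with open("release/position.json") as f:
    positions = json.load(f)
```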
### Citation
```bib
@inproceedings{liang23avnerf,
author = {Liang, Susan and Huang, Chao and Tian, Yapeng and Kumar, Anurag and Xu, Chenliang},
booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
title = {AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis},
year = {2023}
}
```
### Contact
If you have any comments or questions, feel free to contact [Susan Liang](mailto:[email protected]) and [Chao Huang](mailto:[email protected]). | susanliang/RWAVS | [
"region:us"
] | 2023-12-10T21:59:46+00:00 | {} | 2023-12-11T00:18:11+00:00 | [] | [] | TAGS
#region-us
| <div align="center">
<h2>AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis</h2>
_Susan Liang, Chao Huang, Yapeng Tian, Anurag Kumar, Chenliang Xu_
</div>
### RWAVS Dataset
We provide the Real-World Audio-Visual Scene (RWAVS) Dataset.
1. The dataset can be downloaded from this Hugging Face repository.
2. After you download the dataset, you can decompress the 'RWAVS_Release.zip'.
3. The data is organized with the following directory structure.
The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide
* 'transforms_train.json': camera poses for training.
* 'transforms_val.json': camera poses for evaluation. We split the data into 'train' and 'val' subsets, with 80% of the data for training and the rest for evaluation.
* 'transforms_scale_train.json': normalized camera poses for training. We scale 3D coordinates to $[-1, 1]^3$.
* 'transforms_scale_val.json': normalized camera poses for evaluation.
* 'frames': corresponding video frames for each camera pose.
* 'source_syn_re.wav': single-channel audio emitted by the sound source.
* 'binaural_syn_re.wav': two-channel audio captured by the binaural microphone. We synchronize 'source_syn_re.wav' and 'binaural_syn_re.wav' and resample them to $22050$ Hz.
* 'feats_train.pkl': extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose. We then use a pre-trained encoder to extract features from rendered images.
* 'feats_val.pkl': extracted vision and depth features at each camera pose for inference.
* 'URL': normalized 3D coordinates of the sound source.
Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate the camera parameters of these frames.
### Contact
If you have any comments or questions, feel free to contact Susan Liang and Chao Huang. | [
"### RWAVS Dataset\nWe provide the Real-World Audio-Visual Scene (RWAVS) Dataset.\n\n1. The dataset can be downloaded from this Hugging Face repository.\n\n2. After you download the dataset, you can decompress the 'RWAVS_Release.zip'.\n \n\n3. The data is organized with the following directory structure.\n \n\n The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide\n * 'transforms_train.json': camera poses for training.\n * 'transforms_val.json': camera poses for evaluation. We split the data into 'train' and 'val' subsets with 80% data for training and the rest for evaluation.\n * 'transforms_scale_train.json': normalized camera poses for training. We scale 3D coordindates to $[-1, 1]^3$.\n * 'transforms_scale_val.json': normalized camera poses for evaluation.\n * 'frames': corresponding video frames for each camera pose.\n * 'source_syn_re.wav': single-channel audio emitted by the sound source.\n * 'binaural_syn_re.wav': two-channel audio captured by the binaural microphone. We synchronize 'source_syn_re.wav' and 'binaural_syn_re.wav' and resample them to $22050$ Hz.\n * 'feats_train.pkl': extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose. We then use a pre-trained encoder to extract features from rendered images.\n * 'feats_val.pkl': extracted vision and depth features at each camera pose for inference.\n * 'URL': normalized 3D coordinates of the sound source.\n\n Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate the camera parameters of these frames.",
"### Contact\nIf you have any comments or questions, feel free to contact Susan Liang and Chao Huang."
] | [
"TAGS\n#region-us \n",
"### RWAVS Dataset\nWe provide the Real-World Audio-Visual Scene (RWAVS) Dataset.\n\n1. The dataset can be downloaded from this Hugging Face repository.\n\n2. After you download the dataset, you can decompress the 'RWAVS_Release.zip'.\n \n\n3. The data is organized with the following directory structure.\n \n\n The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide\n * 'transforms_train.json': camera poses for training.\n * 'transforms_val.json': camera poses for evaluation. We split the data into 'train' and 'val' subsets with 80% data for training and the rest for evaluation.\n * 'transforms_scale_train.json': normalized camera poses for training. We scale 3D coordindates to $[-1, 1]^3$.\n * 'transforms_scale_val.json': normalized camera poses for evaluation.\n * 'frames': corresponding video frames for each camera pose.\n * 'source_syn_re.wav': single-channel audio emitted by the sound source.\n * 'binaural_syn_re.wav': two-channel audio captured by the binaural microphone. We synchronize 'source_syn_re.wav' and 'binaural_syn_re.wav' and resample them to $22050$ Hz.\n * 'feats_train.pkl': extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose. We then use a pre-trained encoder to extract features from rendered images.\n * 'feats_val.pkl': extracted vision and depth features at each camera pose for inference.\n * 'URL': normalized 3D coordinates of the sound source.\n\n Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate the camera parameters of these frames.",
"### Contact\nIf you have any comments or questions, feel free to contact Susan Liang and Chao Huang."
] | [
6,
466,
23
] | [
"passage: TAGS\n#region-us \n### RWAVS Dataset\nWe provide the Real-World Audio-Visual Scene (RWAVS) Dataset.\n\n1. The dataset can be downloaded from this Hugging Face repository.\n\n2. After you download the dataset, you can decompress the 'RWAVS_Release.zip'.\n \n\n3. The data is organized with the following directory structure.\n \n\n The dataset contains 13 scenes indexed from 1 to 13. For each scene, we provide\n * 'transforms_train.json': camera poses for training.\n * 'transforms_val.json': camera poses for evaluation. We split the data into 'train' and 'val' subsets with 80% data for training and the rest for evaluation.\n * 'transforms_scale_train.json': normalized camera poses for training. We scale 3D coordindates to $[-1, 1]^3$.\n * 'transforms_scale_val.json': normalized camera poses for evaluation.\n * 'frames': corresponding video frames for each camera pose.\n * 'source_syn_re.wav': single-channel audio emitted by the sound source.\n * 'binaural_syn_re.wav': two-channel audio captured by the binaural microphone. We synchronize 'source_syn_re.wav' and 'binaural_syn_re.wav' and resample them to $22050$ Hz.\n * 'feats_train.pkl': extracted vision and depth features at each camera pose for training. We rely on V-NeRF to synthesize vision and depth images for each camera pose. We then use a pre-trained encoder to extract features from rendered images.\n * 'feats_val.pkl': extracted vision and depth features at each camera pose for inference.\n * 'URL': normalized 3D coordinates of the sound source.\n\n Please note that some frames may not have corresponding camera poses because COLMAP fails to estimate the camera parameters of these frames.### Contact\nIf you have any comments or questions, feel free to contact Susan Liang and Chao Huang."
] |
531e83d153abe68ed0adcf52c85eacc9be6507fc | # Pretokenized TinyStories
[Based on roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
* [**361 Tokens**](https://huggingface.co/datasets/enio/TinyStories/tree/main/tok361)
* [**4k Tokens**](https://huggingface.co/datasets/enio/TinyStories/tree/main/tok4096)
* [**32K Tokens**](https://huggingface.co/datasets/enio/TinyStories/tree/main/tok32000)
Pretokenized to speed up training on:
* [karpathy/llama2.c](https://github.com/karpathy/llama2.c)
* [EN10/BabyLlama](https://github.com/EN10/BabyLlama) | enio/TinyStories | [
"task_categories:text-generation",
"language:en",
"license:mit",
"region:us"
] | 2023-12-10T22:04:02+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["text-generation"]} | 2023-12-12T16:24:25+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #license-mit #region-us
| # Pretokenized TinyStories
Based on roneneldan/TinyStories
* 361 Tokens
* 4k Tokens
* 32K Tokens
Pretokenized to speed up training on:
* karpathy/llama2.c
* EN10/BabyLlama | [
"# Pretokenized TinyStories\nBased on roneneldan/TinyStories\n\n* 361 Tokens\n* 4k Tokens\n* 32K Tokens\n\nPretokenized pretokenized to speed up training on:\n* karpathy/llama2.c\n* EN10/BabyLlama"
] | [
"TAGS\n#task_categories-text-generation #language-English #license-mit #region-us \n",
"# Pretokenized TinyStories\nBased on roneneldan/TinyStories\n\n* 361 Tokens\n* 4k Tokens\n* 32K Tokens\n\nPretokenized pretokenized to speed up training on:\n* karpathy/llama2.c\n* EN10/BabyLlama"
] | [
26,
65
] | [
"passage: TAGS\n#task_categories-text-generation #language-English #license-mit #region-us \n# Pretokenized TinyStories\nBased on roneneldan/TinyStories\n\n* 361 Tokens\n* 4k Tokens\n* 32K Tokens\n\nPretokenized pretokenized to speed up training on:\n* karpathy/llama2.c\n* EN10/BabyLlama"
] |
e9f1de22a753b1abe3f2068220eef09710f1a953 |
# EditPackFT
EditPackFT is a dataset built for training LLMs on the task of instructional code editing. The main columns are:
1. `old_contents` the code before the edit
2. `instruction` the instruction to transform the `before` code into the `after` code
3. `new_contents` the code after the edit
4. `content` a pre-formatted training window that can be used to train an LLM with prompts in the format of: `<before><instruction><after>` (see the sketch below)
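A minimal sketch of working with these columns follows; the repository id and column names come from this card, while the hand-assembled window at the end uses illustrative delimiter strings rather than a format documented by the dataset:

```python
from datasets import load_dataset

ds = load_dataset("nuprl/EditPackFT", split="train")
ex = ds[0]

# `content` already holds the pre-formatted <before><instruction><after> window
print(ex["content"][:300])

# The same window assembled by hand from the raw columns; the literal
# tags below are placeholders chosen for illustration only.
window = (
    f"<before>{ex['old_contents']}"
    f"<instruction>{ex['instruction']}"
    f"<after>{ex['new_contents']}"
)
```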
This dataset has been filtered from CommitPackFT. For more detail, [see our paper](https://federico.codes/assets/papers/canitedit.pdf), and our [GitHub repository](https://github.com/nuprl/CanItEdit/tree/main/editpackft). | nuprl/EditPackFT | [
"task_categories:text-generation",
"language:code",
"license:mit",
"code",
"region:us"
] | 2023-12-10T22:30:31+00:00 | {"language": ["code"], "license": "mit", "task_categories": ["text-generation"], "pretty_name": "CanItEdit", "dataset_info": {"features": [{"name": "commit", "dtype": "string"}, {"name": "old_file", "dtype": "string"}, {"name": "new_file", "dtype": "string"}, {"name": "old_contents", "dtype": "string"}, {"name": "new_contents", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "message", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "repos", "dtype": "string"}, {"name": "ndiff", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113752028, "num_examples": 22602}], "download_size": 48124127, "dataset_size": 113752028}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code"]} | 2023-12-14T20:13:28+00:00 | [] | [
"code"
] | TAGS
#task_categories-text-generation #language-code #license-mit #code #region-us
|
# EditPackFT
EditPackFT is a dataset built for training LLMs on the task of instructional code editing. The main columns are:
1. 'old_contents' the code before the edit
2. 'instruction' the instruction to transform the 'before' code into the 'after' code
3. 'new_contents' the code after the edit
4. 'content' a pre-formatted training window that can be used to train an LLM with prompts in the format of: '<before><instruction><after>'
This dataset has been filtered from CommitPackFT. For more detail, see our paper, and our GitHub repository. | [
"# EditPackFT\nEditPackFT is a dataset built for training LLMs on the task of instructional code editing. The mail columns are:\n1. 'old_contents' the code before the edit\n2. 'instruction' the instruction to transform the 'before' code into the 'after' code\n3. 'new_contents' the code after the edit\n4. 'content' a pre-formatted training window that can be used to train an LLM with prompts in the format of: '<before><instruction><after>'\n\nThis dataset has been filtered from CommitPackFT. For more detail, see our paper, and our GitHub repository."
] | [
"TAGS\n#task_categories-text-generation #language-code #license-mit #code #region-us \n",
"# EditPackFT\nEditPackFT is a dataset built for training LLMs on the task of instructional code editing. The mail columns are:\n1. 'old_contents' the code before the edit\n2. 'instruction' the instruction to transform the 'before' code into the 'after' code\n3. 'new_contents' the code after the edit\n4. 'content' a pre-formatted training window that can be used to train an LLM with prompts in the format of: '<before><instruction><after>'\n\nThis dataset has been filtered from CommitPackFT. For more detail, see our paper, and our GitHub repository."
] | [
28,
150
] | [
"passage: TAGS\n#task_categories-text-generation #language-code #license-mit #code #region-us \n# EditPackFT\nEditPackFT is a dataset built for training LLMs on the task of instructional code editing. The mail columns are:\n1. 'old_contents' the code before the edit\n2. 'instruction' the instruction to transform the 'before' code into the 'after' code\n3. 'new_contents' the code after the edit\n4. 'content' a pre-formatted training window that can be used to train an LLM with prompts in the format of: '<before><instruction><after>'\n\nThis dataset has been filtered from CommitPackFT. For more detail, see our paper, and our GitHub repository."
] |
7f771eadaf38636b3e9ce6c04d76c1df336b4a92 |
An attempt to imbue a gruff, Rocket Raccoon-like personality from GoG into the Rocket 3B model. An Alpaca-formatted dataset generated by ehartford_dolphin-2.2.1-mistral-7b. | TitleOS/rocketraccoon_personality_alpaca | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:mpl-2.0",
"alpaca",
"synthetic data",
"region:us"
] | 2023-12-10T22:30:35+00:00 | {"language": ["en"], "license": "mpl-2.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["alpaca", "synthetic data"]} | 2023-12-19T01:40:13+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #license-mpl-2.0 #alpaca #synthetic data #region-us
|
An attempt to imbue a gruff, Rocket Raccoon-like personality from GoG into the Rocket 3B model. An Alpaca-formatted dataset generated by ehartford_dolphin-2.2.1-mistral-7b. | [] | [
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mpl-2.0 #alpaca #synthetic data #region-us \n"
] | [
47
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #license-mpl-2.0 #alpaca #synthetic data #region-us \n"
] |
794c307e4d839b363cf0e021d15d809e19a1945f |
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.02
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/perlthoughts/Chupacabra-7B-v2.02
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B-v2.02](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.02) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.02",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-10T22:45:13.984818](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.02/blob/main/results_2023-12-10T22-45-13.984818.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6238868387078775,
"acc_stderr": 0.03276760160523371,
"acc_norm": 0.6244361461402262,
"acc_norm_stderr": 0.033437406083812314,
"mc1": 0.4724602203182375,
"mc1_stderr": 0.017476930190712187,
"mc2": 0.640558000158967,
"mc2_stderr": 0.015227441871796771
},
"harness|arc:challenge|25": {
"acc": 0.6544368600682594,
"acc_stderr": 0.013896938461145682,
"acc_norm": 0.6766211604095563,
"acc_norm_stderr": 0.013669421630012127
},
"harness|hellaswag|10": {
"acc": 0.663114917347142,
"acc_stderr": 0.004716792874433209,
"acc_norm": 0.8389762995419239,
"acc_norm_stderr": 0.0036680163609758335
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.037385206761196686,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.037385206761196686
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.02863723563980089,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.02863723563980089
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201942,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201942
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.02522545028406788,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.02522545028406788
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782648,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782648
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.031584153240477114,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.031584153240477114
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.030088629490217487,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.030088629490217487
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709443,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6384615384615384,
"acc_stderr": 0.024359581465397004,
"acc_norm": 0.6384615384615384,
"acc_norm_stderr": 0.024359581465397004
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.029045600290616255,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.029045600290616255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.03077805742293167,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.03077805742293167
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.015703498348461763,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.015703498348461763
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7932489451476793,
"acc_stderr": 0.026361651668389094,
"acc_norm": 0.7932489451476793,
"acc_norm_stderr": 0.026361651668389094
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229136,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229136
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6870229007633588,
"acc_stderr": 0.04066962905677698,
"acc_norm": 0.6870229007633588,
"acc_norm_stderr": 0.04066962905677698
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516302,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094631,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094631
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6993865030674846,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.6993865030674846,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077823,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077823
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8109833971902938,
"acc_stderr": 0.014000791294406999,
"acc_norm": 0.8109833971902938,
"acc_norm_stderr": 0.014000791294406999
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.025522474632121612,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.025522474632121612
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43798882681564244,
"acc_stderr": 0.01659339422756484,
"acc_norm": 0.43798882681564244,
"acc_norm_stderr": 0.01659339422756484
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.684887459807074,
"acc_stderr": 0.026385273703464485,
"acc_norm": 0.684887459807074,
"acc_norm_stderr": 0.026385273703464485
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6975308641975309,
"acc_stderr": 0.025557653981868062,
"acc_norm": 0.6975308641975309,
"acc_norm_stderr": 0.025557653981868062
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44654498044328556,
"acc_stderr": 0.012697046024399673,
"acc_norm": 0.44654498044328556,
"acc_norm_stderr": 0.012697046024399673
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6323529411764706,
"acc_stderr": 0.029289413409403192,
"acc_norm": 0.6323529411764706,
"acc_norm_stderr": 0.029289413409403192
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6552287581699346,
"acc_stderr": 0.01922832201869664,
"acc_norm": 0.6552287581699346,
"acc_norm_stderr": 0.01922832201869664
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249772,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249772
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.027686913588013024,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.027686913588013024
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4724602203182375,
"mc1_stderr": 0.017476930190712187,
"mc2": 0.640558000158967,
"mc2_stderr": 0.015227441871796771
},
"harness|winogrande|5": {
"acc": 0.7940015785319653,
"acc_stderr": 0.01136647435200883
},
"harness|gsm8k|5": {
"acc": 0.6194086429112965,
"acc_stderr": 0.013373971277729817
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
"region:us"
] | 2023-12-10T22:48:04+00:00 | {"pretty_name": "Evaluation run of perlthoughts/Chupacabra-7B-v2.02", "dataset_summary": "Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B-v2.02](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.02) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.02\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-10T22:45:13.984818](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.02/blob/main/results_2023-12-10T22-45-13.984818.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6238868387078775,\n \"acc_stderr\": 0.03276760160523371,\n \"acc_norm\": 0.6244361461402262,\n \"acc_norm_stderr\": 0.033437406083812314,\n \"mc1\": 0.4724602203182375,\n \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.640558000158967,\n \"mc2_stderr\": 0.015227441871796771\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6544368600682594,\n \"acc_stderr\": 0.013896938461145682,\n \"acc_norm\": 0.6766211604095563,\n \"acc_norm_stderr\": 0.013669421630012127\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.663114917347142,\n \"acc_stderr\": 0.004716792874433209,\n \"acc_norm\": 0.8389762995419239,\n \"acc_norm_stderr\": 0.0036680163609758335\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 
0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201942,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201942\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3994708994708995,\n \"acc_stderr\": 0.02522545028406788,\n \"acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.02522545028406788\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782648,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782648\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.031584153240477114,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.031584153240477114\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.030088629490217487,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.030088629490217487\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709443,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.6384615384615384,\n \"acc_stderr\": 0.024359581465397004,\n \"acc_norm\": 0.6384615384615384,\n \"acc_norm_stderr\": 0.024359581465397004\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34814814814814815,\n \"acc_stderr\": 0.029045600290616255,\n \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.03077805742293167,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.03077805742293167\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8403669724770643,\n \"acc_stderr\": 0.015703498348461763,\n \"acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.015703498348461763\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.026361651668389094,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.026361651668389094\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.030769352008229136,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.030769352008229136\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6870229007633588,\n \"acc_stderr\": 0.04066962905677698,\n \"acc_norm\": 0.6870229007633588,\n \"acc_norm_stderr\": 0.04066962905677698\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.04236511258094631,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.04236511258094631\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077823,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077823\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526094\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294406999,\n \"acc_norm\": 
0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294406999\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.025522474632121612,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.025522474632121612\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43798882681564244,\n \"acc_stderr\": 0.01659339422756484,\n \"acc_norm\": 0.43798882681564244,\n \"acc_norm_stderr\": 0.01659339422756484\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.02625605383571896,\n \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.02625605383571896\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n \"acc_stderr\": 0.026385273703464485,\n \"acc_norm\": 0.684887459807074,\n \"acc_norm_stderr\": 0.026385273703464485\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6975308641975309,\n \"acc_stderr\": 0.025557653981868062,\n \"acc_norm\": 0.6975308641975309,\n \"acc_norm_stderr\": 0.025557653981868062\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44654498044328556,\n \"acc_stderr\": 0.012697046024399673,\n \"acc_norm\": 0.44654498044328556,\n \"acc_norm_stderr\": 0.012697046024399673\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6552287581699346,\n \"acc_stderr\": 0.01922832201869664,\n \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.01922832201869664\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249772,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249772\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.027686913588013024,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.027686913588013024\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4724602203182375,\n \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.640558000158967,\n \"mc2_stderr\": 0.015227441871796771\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7940015785319653,\n \"acc_stderr\": 0.01136647435200883\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6194086429112965,\n \"acc_stderr\": 0.013373971277729817\n }\n}\n```", "repo_url": 
"https://huggingface.co/perlthoughts/Chupacabra-7B-v2.02", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|arc:challenge|25_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|gsm8k|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hellaswag|10_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T22-45-13.984818.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T22-45-13.984818.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-10T22-45-13.984818.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-10T22-45-13.984818.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T22-45-13.984818.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T22-45-13.984818.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["**/details_harness|winogrande|5_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-10T22-45-13.984818.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_10T22_45_13.984818", "path": ["results_2023-12-10T22-45-13.984818.parquet"]}, {"split": "latest", "path": 
["results_2023-12-10T22-45-13.984818.parquet"]}]}]} | 2023-12-10T22:48:47+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.02
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.02 on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-10T22:45:13.984818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.02",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.02 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T22:45:13.984818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.02",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.02 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T22:45:13.984818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.02## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.02 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-10T22:45:13.984818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3506e5f6c2b9e1a99464470bae97c127741883d8 |
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-deepseek-67b-v15.2](https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Load the per-sample details for a single task (here: Winogrande, 5-shot).
# The "train" split always points to the latest results.
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.2",
	"harness_winogrande_5",
	split="train")
```
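The call above returns a standard `datasets.Dataset`, so the usual tooling applies; for instance, `data.to_pandas()` turns the per-sample details into a DataFrame for inspection.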
## Latest results
These are the [latest results from run 2023-12-10T23:02:54.951377](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.2/blob/main/results_2023-12-10T23-02-54.951377.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results files and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.714438762465758,
"acc_stderr": 0.02999619071511929,
"acc_norm": 0.7169995069840682,
"acc_norm_stderr": 0.030591759435653683,
"mc1": 0.3929008567931457,
"mc1_stderr": 0.017097248285233065,
"mc2": 0.5620230899021985,
"mc2_stderr": 0.014942776097379195
},
"harness|arc:challenge|25": {
"acc": 0.6424914675767918,
"acc_stderr": 0.014005494275916573,
"acc_norm": 0.6860068259385665,
"acc_norm_stderr": 0.0135626912247263
},
"harness|hellaswag|10": {
"acc": 0.6717785301732723,
"acc_stderr": 0.004686062421158153,
"acc_norm": 0.8636725751842262,
"acc_norm_stderr": 0.0034243464481037295
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.039725528847851375,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.039725528847851375
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7960526315789473,
"acc_stderr": 0.032790004063100515,
"acc_norm": 0.7960526315789473,
"acc_norm_stderr": 0.032790004063100515
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7660377358490567,
"acc_stderr": 0.02605529690115292,
"acc_norm": 0.7660377358490567,
"acc_norm_stderr": 0.02605529690115292
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8541666666666666,
"acc_stderr": 0.029514245964291762,
"acc_norm": 0.8541666666666666,
"acc_norm_stderr": 0.029514245964291762
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.03476599607516478,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.03476599607516478
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6808510638297872,
"acc_stderr": 0.03047297336338005,
"acc_norm": 0.6808510638297872,
"acc_norm_stderr": 0.03047297336338005
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.543859649122807,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.543859649122807,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6827586206896552,
"acc_stderr": 0.038783523721386215,
"acc_norm": 0.6827586206896552,
"acc_norm_stderr": 0.038783523721386215
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5264550264550265,
"acc_stderr": 0.02571523981134675,
"acc_norm": 0.5264550264550265,
"acc_norm_stderr": 0.02571523981134675
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8290322580645161,
"acc_stderr": 0.021417242936321582,
"acc_norm": 0.8290322580645161,
"acc_norm_stderr": 0.021417242936321582
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5911330049261084,
"acc_stderr": 0.03459058815883233,
"acc_norm": 0.5911330049261084,
"acc_norm_stderr": 0.03459058815883233
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8787878787878788,
"acc_stderr": 0.023253157951942088,
"acc_norm": 0.8787878787878788,
"acc_norm_stderr": 0.023253157951942088
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9637305699481865,
"acc_stderr": 0.013492659751295141,
"acc_norm": 0.9637305699481865,
"acc_norm_stderr": 0.013492659751295141
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.022421273612923714,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.022421273612923714
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.02889774874113114,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.02889774874113114
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8445378151260504,
"acc_stderr": 0.023536818625398904,
"acc_norm": 0.8445378151260504,
"acc_norm_stderr": 0.023536818625398904
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4966887417218543,
"acc_stderr": 0.04082393379449654,
"acc_norm": 0.4966887417218543,
"acc_norm_stderr": 0.04082393379449654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9045871559633027,
"acc_stderr": 0.012595899282335801,
"acc_norm": 0.9045871559633027,
"acc_norm_stderr": 0.012595899282335801
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.033247089118091176,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.033247089118091176
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9313725490196079,
"acc_stderr": 0.017744453647073322,
"acc_norm": 0.9313725490196079,
"acc_norm_stderr": 0.017744453647073322
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.018889750550956715,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.018889750550956715
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.02737309550054019,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.02737309550054019
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8347107438016529,
"acc_stderr": 0.03390780612972776,
"acc_norm": 0.8347107438016529,
"acc_norm_stderr": 0.03390780612972776
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.803680981595092,
"acc_stderr": 0.031207970394709225,
"acc_norm": 0.803680981595092,
"acc_norm_stderr": 0.031207970394709225
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5982142857142857,
"acc_stderr": 0.04653333146973647,
"acc_norm": 0.5982142857142857,
"acc_norm_stderr": 0.04653333146973647
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.03393295729761012,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.03393295729761012
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9316239316239316,
"acc_stderr": 0.016534627684311368,
"acc_norm": 0.9316239316239316,
"acc_norm_stderr": 0.016534627684311368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9042145593869731,
"acc_stderr": 0.010524031079055845,
"acc_norm": 0.9042145593869731,
"acc_norm_stderr": 0.010524031079055845
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7803468208092486,
"acc_stderr": 0.02228963885261789,
"acc_norm": 0.7803468208092486,
"acc_norm_stderr": 0.02228963885261789
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4670391061452514,
"acc_stderr": 0.016686126653013934,
"acc_norm": 0.4670391061452514,
"acc_norm_stderr": 0.016686126653013934
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7973856209150327,
"acc_stderr": 0.02301544687798568,
"acc_norm": 0.7973856209150327,
"acc_norm_stderr": 0.02301544687798568
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7909967845659164,
"acc_stderr": 0.02309314039837422,
"acc_norm": 0.7909967845659164,
"acc_norm_stderr": 0.02309314039837422
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.845679012345679,
"acc_stderr": 0.020100830999851,
"acc_norm": 0.845679012345679,
"acc_norm_stderr": 0.020100830999851
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5354609929078015,
"acc_stderr": 0.029752389657427054,
"acc_norm": 0.5354609929078015,
"acc_norm_stderr": 0.029752389657427054
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5573663624511083,
"acc_stderr": 0.012685906538206239,
"acc_norm": 0.5573663624511083,
"acc_norm_stderr": 0.012685906538206239
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.026799562024887664,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.026799562024887664
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.01681902837573638,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.01681902837573638
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7918367346938775,
"acc_stderr": 0.025991117672813296,
"acc_norm": 0.7918367346938775,
"acc_norm_stderr": 0.025991117672813296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8805970149253731,
"acc_stderr": 0.02292879327721974,
"acc_norm": 0.8805970149253731,
"acc_norm_stderr": 0.02292879327721974
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3929008567931457,
"mc1_stderr": 0.017097248285233065,
"mc2": 0.5620230899021985,
"mc2_stderr": 0.014942776097379195
},
"harness|winogrande|5": {
"acc": 0.8445146014206788,
"acc_stderr": 0.010184308214775778
},
"harness|gsm8k|5": {
"acc": 0.66868840030326,
"acc_stderr": 0.012964999679688687
}
}
```
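The aggregated metrics above can also be loaded programmatically through the "results" configuration. The sketch below is illustrative only; the exact column layout of the results files is not documented here, so inspect the loaded dataset before relying on specific fields:

```python
from datasets import load_dataset

# Load the aggregated results; the "latest" split points to the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.2",
    "results",
    split="latest",
)

# The column layout is not documented here, so inspect it before use.
print(results.column_names)
print(results[0])
```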
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
#region-us
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
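(A minimal sketch, mirroring the loader snippet used by sibling leaderboard cards in this dump; the repository id is inferred from the leaderboard's `details_<org>__<model>` naming pattern, and `harness_winogrande_5` is one of the configurations listed in this card's metadata, so adjust both if they differ.)
```python
from datasets import load_dataset

# Each configuration is one evaluated task; the "train" split
# always points to the latest run for that task.
data = load_dataset(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.2",
    "harness_winogrande_5",
    split="train",
)
```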
## Latest results
These are the latest results from run 2023-12-10T23:02:54.951377 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T23:02:54.951377(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-10T23:02:54.951377(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-deepseek-67b-v15.2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-10T23:02:54.951377(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
88fe528822fbbf29ebf974867b5fd2b2c1da75c3 | This dataset was based on the https://huggingface.co/datasets/kmfoda/booksum dataset. It was made by prompting GPT-3 with book chapters
from the booksum dataset to create a multitude of Stable Diffusion-friendly prompts. This can be utilized in large-language-model fine-tuning or training to create a model that outputs Stable Diffusion prompts. | vahn9995/booksum-stable-diffusion-prompt | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"license:bsd-3-clause-clear",
"region:us"
] | 2023-12-10T23:27:55+00:00 | {"language": ["en"], "license": "bsd-3-clause-clear", "size_categories": ["1K<n<10K"], "task_categories": ["summarization"], "pretty_name": "Stable Diffusion Book Sum Prompts"} | 2023-12-11T00:22:27+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #size_categories-1K<n<10K #language-English #license-bsd-3-clause-clear #region-us
| This dataset was based on the URL dataset. It was made by prompting GPT-3 with book chapters
from the booksum dataset to create a multitude of Stable Diffusion-friendly prompts. This can be utilized in large-language-model fine-tuning or training to create a model that outputs Stable Diffusion prompts. | [] | [
"TAGS\n#task_categories-summarization #size_categories-1K<n<10K #language-English #license-bsd-3-clause-clear #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-summarization #size_categories-1K<n<10K #language-English #license-bsd-3-clause-clear #region-us \n"
] |
2da06aa204279963c3cdb36603d9773843835d0d |
# README.md
## Enhancing Large Language Model Performance in Digital Marketing Strategies with a Specialized Prompt Dataset
### Creator: Muratcan Koylan
---
### About the Dataset
This dataset, comprising 4,643 specialized prompts across various categories of digital marketing, aims to enhance the performance of Large Language Models (LLMs) like GPT-3 in generating accurate, relevant, and industry-specific marketing strategies.
30 Paid Search Prompts
15 ROAS Prompts
45 Facebook Ads Prompts
13 Google Remarketing Prompts
15 Ad Network Prompts
14 LinkedIn Ads Prompts
14 Advertising Budget Prompts
16 Quality Score Prompts
14 Bing Ads Prompts
15 Classified Advertising Prompts
20 CPM Prompts
15 X (Twitter) Prompts
15 CPC Prompts
15 PPC Prompts
15 Instagram Ads Prompts
15 YouTube Ads Prompts
15 Google Ads Prompts
15 Programmatic Advertising Prompts
15 Remarketing Prompts
15 CPV Prompts
15 Reach Prompts
15 CPL Prompts
15 Ad Rank Prompts
15 Interstitial Prompts
15 Ad Sense Prompts
15 SEM Prompts
20 Affiliates Prompts
15 Display Advertisement Prompts
20 Video Ads Prompts
20 Mobile Ads Prompts
20 TikTok Ads Prompts
20 Pinterest Ads Prompts
20 Shopping Ads Prompts
#### Dataset Composition:
- **StrategyDomain**: Main category representing the broader strategic area of digital marketing.
- **TacticScope**: Sub-category focusing on specific tactics within the StrategyDomain.
- **StrategicPrompt**: The actual marketing prompt text designed to simulate real-world marketing scenarios.
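As a hedged illustration of this schema, the sketch below loads the dataset with the Hugging Face `datasets` library and filters on the fields above; the `train` split name and the literal domain value `"Paid Search"` are assumptions based on this card, not guarantees from the dataset itself.
```python
from datasets import load_dataset

# Assumes a single "train" split and the three columns described above.
ds = load_dataset("MuratcanKoylan/MarketingStructuralPrompts", split="train")

# Keep only prompts whose strategic area is paid search (assumed label).
paid_search = ds.filter(lambda row: row["StrategyDomain"] == "Paid Search")

# Preview a few tactic/prompt pairs.
for row in paid_search.select(range(min(3, len(paid_search)))):
    print(row["TacticScope"], "->", row["StrategicPrompt"][:80])
```
Swapping the filter value lets the same pattern slice the dataset by any other StrategyDomain or TacticScope.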
#### Methodology:
The dataset represents a synergistic fusion of human expertise and advanced AI technology, blending 30% human-generated content with 70% synthetic data crafted using cutting-edge generative AI models like GPT-4, Claude2, and LLama2. This approach strategically leverages the nuanced creativity and contextual understanding of human input, while exponentially expanding the dataset's breadth and depth through AI's vast generative capabilities. This methodology ensures the dataset embodies both the rich, detailed insights of human marketing experts and the diverse, innovative perspectives that AI models can offer.
#### Applications:
- **Fine-Tuning LLMs**: This dataset is pivotal for refining LLMs to produce more targeted, effective marketing strategies. By exposing LLMs to a diverse array of real-world marketing scenarios, they become adept at crafting nuanced and strategically sound solutions.
- **Marketing Campaign Development**: A valuable tool for marketers, this dataset aids in the ideation and development of comprehensive marketing campaigns, offering inspiration and strategic guidance.
- **Training AI Agents**: Ideal for training AI agents to autonomously handle various digital marketing tasks, this dataset can drive efficiency and innovation in marketing automation.
- **Cross-Domain Potential**: Beyond marketing, this dataset's structure and approach hold potential for adaptation and application in sectors like finance, healthcare, and education, where specialized language models can offer significant value.
---
### Experimental Results
Upon rigorous testing against standard LLM benchmarks, the dataset has demonstrated remarkable improvements in producing strategically relevant, creatively rich, and platform-specific accurate marketing content. These results underscore the dataset's efficacy in enhancing the contextual and strategic understanding of LLMs within the realm of digital marketing. Results will be shared in the near future with a proper paper.
---
### Future Directions
Looking ahead, the goal is to continuously evolve and enrich this dataset, incorporating emerging marketing trends and novel concepts. This ongoing development aims to broaden the dataset's utility, making it an indispensable tool for future LLM applications in digital marketing and beyond, including potential cross-disciplinary applications that push the boundaries of AI's role in various professional fields.
---
### Contact and Collaboration
As a fervent advocate for AI-driven innovation in marketing, I welcome collaboration and dialogue with fellow AI enthusiasts, marketers, and builders. My aim is to foster a community of like-minded professionals who are passionate about exploring the frontiers of AI in marketing. Reach out to me on X (@youraimarketer) for any collaboration ideas, discussions, or queries regarding this dataset.
---
### Acknowledgments
This dataset stands as a testament to the power of collaborative innovation, combining the best of human creativity and AI's transformative capabilities. A heartfelt thank you to all the contributors, including AI developers, data scientists, and marketing experts, whose collective efforts have brought this project to fruition.
--- | MuratcanKoylan/MarketingStructuralPrompts | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"marketing",
"prompting",
"template",
"region:us"
] | 2023-12-10T23:36:39+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "tags": ["marketing", "prompting", "template"]} | 2023-12-11T04:34:09+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #marketing #prompting #template #region-us
|
# URL
## Enhancing Large Language Model Performance in Digital Marketing Strategies with a Specialized Prompt Dataset
### Creator: Muratcan Koylan
---
### About the Dataset
This dataset, comprising 4,643 specialized prompts across various categories of digital marketing, aims to enhance the performance of Large Language Models (LLMs) like GPT-3 in generating accurate, relevant, and industry-specific marketing strategies.
30 Paid Search Prompts
15 ROAS Prompts
45 Facebook Ads Prompts
13 Google Remarketing Prompts
15 Ad Network Prompts
14 LinkedIn Ads Prompts
14 Advertising Budget Prompts
16 Quality Score Prompts
14 Bing Ads Prompts
15 Classified Advertising Prompts
20 CPM Prompts
15 X (Twitter) Prompts
15 CPC Prompts
15 PPC Prompts
15 Instagram Ads Prompts
15 YouTube Ads Prompts
15 Google Ads Prompts
15 Programmatic Advertising Prompts
15 Remarketing Prompts
15 CPV Prompts
15 Reach Prompts
15 CPL Prompts
15 Ad Rank Prompts
15 Interstitial Prompts
15 Ad Sense Prompts
15 SEM Prompts
20 Affiliates Prompts
15 Display Advertisement Prompts
20 Video Ads Prompts
20 Mobile Ads Prompts
20 TikTok Ads Prompts
20 Pinterest Ads Prompts
20 Shopping Ads Prompts
#### Dataset Composition:
- StrategyDomain: Main category representing the broader strategic area of digital marketing.
- TacticScope: Sub-category focusing on specific tactics within the StrategyDomain.
- StrategicPrompt: The actual marketing prompt text designed to simulate real-world marketing scenarios.
#### Methodology:
The dataset represents a synergistic fusion of human expertise and advanced AI technology, blending 30% human-generated content with 70% synthetic data crafted using cutting-edge generative AI models like GPT-4, Claude2, and LLama2. This approach strategically leverages the nuanced creativity and contextual understanding of human input, while exponentially expanding the dataset's breadth and depth through AI's vast generative capabilities. This methodology ensures the dataset embodies both the rich, detailed insights of human marketing experts and the diverse, innovative perspectives that AI models can offer.
#### Applications:
- Fine-Tuning LLMs: This dataset is pivotal for refining LLMs to produce more targeted, effective marketing strategies. By exposing LLMs to a diverse array of real-world marketing scenarios, they become adept at crafting nuanced and strategically sound solutions.
- Marketing Campaign Development: A valuable tool for marketers, this dataset aids in the ideation and development of comprehensive marketing campaigns, offering inspiration and strategic guidance.
- Training AI Agents: Ideal for training AI agents to autonomously handle various digital marketing tasks, this dataset can drive efficiency and innovation in marketing automation.
- Cross-Domain Potential: Beyond marketing, this dataset's structure and approach hold potential for adaptation and application in sectors like finance, healthcare, and education, where specialized language models can offer significant value.
---
### Experimental Results
Upon rigorous testing against standard LLM benchmarks, the dataset has demonstrated remarkable improvements in producing strategically relevant, creatively rich, and platform-specific accurate marketing content. These results underscore the dataset's efficacy in enhancing the contextual and strategic understanding of LLMs within the realm of digital marketing. Results will be shared in the near future with a proper paper.
---
### Future Directions
Looking ahead, the goal is to continuously evolve and enrich this dataset, incorporating emerging marketing trends and novel concepts. This ongoing development aims to broaden the dataset's utility, making it an indispensable tool for future LLM applications in digital marketing and beyond, including potential cross-disciplinary applications that push the boundaries of AI's role in various professional fields.
---
### Contact and Collaboration
As a fervent advocate for AI-driven innovation in marketing, I welcome collaboration and dialogue with fellow AI enthusiasts, marketers, and builders. My aim is to foster a community of like-minded professionals who are passionate about exploring the frontiers of AI in marketing. Reach out to me on X (@youraimarketer) for any collaboration ideas, discussions, or queries regarding this dataset.
---
### Acknowledgments
This dataset stands as a testament to the power of collaborative innovation, combining the best of human creativity and AI's transformative capabilities. A heartfelt thank you to all the contributors, including AI developers, data scientists, and marketing experts, whose collective efforts have brought this project to fruition.
--- | [
"# URL",
"## Enhancing Large Language Model Performance in Digital Marketing Strategies with a Specialized Prompt Dataset",
"### Creator: Muratcan Koylan\n\n---",
"### About the Dataset\n\nThis dataset, comprising 4,643 specialized prompts across various categories of digital marketing, aims to enhance the performance of Large Language Models (LLMs) like GPT-3 in generating accurate, relevant, and industry-specific marketing strategies.\n30 Paid Search Prompts\n15 ROAS Prompts\n45 Facebook Ads Prompts\n13 Google Remarketing Prompts\n15 Ad Network Prompts\n14 Linkedin Ads Promtps\n14 Advertising Budget Prompts\n16 Quality Score Prompts\n14 BING Ads Prompts\n15 Classified Advertising Prompts\n20 CPM Prompts\n15 X (Twitter) Prompts\n15 CPC Prompts\n15 PPC Prompts\n15 Instagram Ads Prompts\n15 Youtube Ads Prompts\n15 Google Ads Prompts\n15 Programmatic Advertising Prompts\n15 Remarketing Promtps\n15 CPV Prompts\n15 Reach Promtps\n15 CPL Prompts\n15 Ad Rank Prompts\n15 Interstitial Prompts\n15 Ad Sense Prompts\n15 SEM Prompts\n20 Affiliates Prompts\n15 Dsiplay Advertisement Promtps\n20 Video Ads Promtps\n20 Mobile Ads Prompts\n20 TikTok Ads Promtps\n20 Pinterest Ads Prompts\n20 Shopping Ads Promtps",
"#### Dataset Composition:\n- StrategyDomain: Main category representing the broader strategic area of digital marketing.\n- TacticScope: Sub-category focusing on specific tactics within the StrategyDomain.\n- StrategicPrompt: The actual marketing prompt text designed to simulate real-world marketing scenarios.",
"#### Methodology:\nThe dataset represents a synergistic fusion of human expertise and advanced AI technology, blending 30% human-generated content with 70% synthetic data crafted using cutting-edge generative AI models like GPT-4, Claude2, and LLama2. This approach strategically leverages the nuanced creativity and contextual understanding of human input, while exponentially expanding the dataset's breadth and depth through AI's vast generative capabilities. This methodology ensures the dataset embodies both the rich, detailed insights of human marketing experts and the diverse, innovative perspectives that AI models can offer.",
"#### Applications:\n- Fine-Tuning LLMs: This dataset is pivotal for refining LLMs to produce more targeted, effective marketing strategies. By exposing LLMs to a diverse array of real-world marketing scenarios, they become adept at crafting nuanced and strategically sound solutions.\n- Marketing Campaign Development: A valuable tool for marketers, this dataset aids in the ideation and development of comprehensive marketing campaigns, offering inspiration and strategic guidance.\n- Training AI Agents: Ideal for training AI agents to autonomously handle various digital marketing tasks, this dataset can drive efficiency and innovation in marketing automation.\n- Cross-Domain Potential: Beyond marketing, this dataset's structure and approach hold potential for adaptation and application in sectors like finance, healthcare, and education, where specialized language models can offer significant value.\n---",
"### Experimental Results\n\nUpon rigorous testing against standard LLM benchmarks, the dataset has demonstrated remarkable improvements in producing strategically relevant, creatively rich, and platform-specific accurate marketing content. These results underscore the dataset's efficacy in enhancing the contextual and strategic understanding of LLMs within the realm of digital marketing. Results will be shared in the near future with a proper paper.\n\n---",
"### Future Directions\n\nLooking ahead, the goal is to continuously evolve and enrich this dataset, incorporating emerging marketing trends and novel concepts. This ongoing development aims to broaden the dataset's utility, making it an indispensable tool for future LLM applications in digital marketing and beyond, including potential cross-disciplinary applications that push the boundaries of AI's role in various professional fields.\n\n\n---",
"### Contact and Collaboration\n\nAs a fervent advocate for AI-driven innovation in marketing, I welcome collaboration and dialogue with fellow AI enthusiasts, marketers, and builders. My aim is to foster a community of like-minded professionals who are passionate about exploring the frontiers of AI in marketing. Reach out to me on X (@youraimarketer) for any collaboration ideas, discussions, or queries regarding this dataset.\n\n---",
"### Acknowledgments\n\nThis dataset stands as a testament to the power of collaborative innovation, combining the best of human creativity and AI's transformative capabilities. A heartfelt thank you to all the contributors, including AI developers, data scientists, and marketing experts, whose collective efforts have brought this project to fruition.\n\n\n---"
] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #marketing #prompting #template #region-us \n",
"# URL",
"## Enhancing Large Language Model Performance in Digital Marketing Strategies with a Specialized Prompt Dataset",
"### Creator: Muratcan Koylan\n\n---",
"### About the Dataset\n\nThis dataset, comprising 4,643 specialized prompts across various categories of digital marketing, aims to enhance the performance of Large Language Models (LLMs) like GPT-3 in generating accurate, relevant, and industry-specific marketing strategies.\n30 Paid Search Prompts\n15 ROAS Prompts\n45 Facebook Ads Prompts\n13 Google Remarketing Prompts\n15 Ad Network Prompts\n14 Linkedin Ads Promtps\n14 Advertising Budget Prompts\n16 Quality Score Prompts\n14 BING Ads Prompts\n15 Classified Advertising Prompts\n20 CPM Prompts\n15 X (Twitter) Prompts\n15 CPC Prompts\n15 PPC Prompts\n15 Instagram Ads Prompts\n15 Youtube Ads Prompts\n15 Google Ads Prompts\n15 Programmatic Advertising Prompts\n15 Remarketing Promtps\n15 CPV Prompts\n15 Reach Promtps\n15 CPL Prompts\n15 Ad Rank Prompts\n15 Interstitial Prompts\n15 Ad Sense Prompts\n15 SEM Prompts\n20 Affiliates Prompts\n15 Dsiplay Advertisement Promtps\n20 Video Ads Promtps\n20 Mobile Ads Prompts\n20 TikTok Ads Promtps\n20 Pinterest Ads Prompts\n20 Shopping Ads Promtps",
"#### Dataset Composition:\n- StrategyDomain: Main category representing the broader strategic area of digital marketing.\n- TacticScope: Sub-category focusing on specific tactics within the StrategyDomain.\n- StrategicPrompt: The actual marketing prompt text designed to simulate real-world marketing scenarios.",
"#### Methodology:\nThe dataset represents a synergistic fusion of human expertise and advanced AI technology, blending 30% human-generated content with 70% synthetic data crafted using cutting-edge generative AI models like GPT-4, Claude2, and LLama2. This approach strategically leverages the nuanced creativity and contextual understanding of human input, while exponentially expanding the dataset's breadth and depth through AI's vast generative capabilities. This methodology ensures the dataset embodies both the rich, detailed insights of human marketing experts and the diverse, innovative perspectives that AI models can offer.",
"#### Applications:\n- Fine-Tuning LLMs: This dataset is pivotal for refining LLMs to produce more targeted, effective marketing strategies. By exposing LLMs to a diverse array of real-world marketing scenarios, they become adept at crafting nuanced and strategically sound solutions.\n- Marketing Campaign Development: A valuable tool for marketers, this dataset aids in the ideation and development of comprehensive marketing campaigns, offering inspiration and strategic guidance.\n- Training AI Agents: Ideal for training AI agents to autonomously handle various digital marketing tasks, this dataset can drive efficiency and innovation in marketing automation.\n- Cross-Domain Potential: Beyond marketing, this dataset's structure and approach hold potential for adaptation and application in sectors like finance, healthcare, and education, where specialized language models can offer significant value.\n---",
"### Experimental Results\n\nUpon rigorous testing against standard LLM benchmarks, the dataset has demonstrated remarkable improvements in producing strategically relevant, creatively rich, and platform-specific accurate marketing content. These results underscore the dataset's efficacy in enhancing the contextual and strategic understanding of LLMs within the realm of digital marketing. Results will be shared in the near future with a proper paper.\n\n---",
"### Future Directions\n\nLooking ahead, the goal is to continuously evolve and enrich this dataset, incorporating emerging marketing trends and novel concepts. This ongoing development aims to broaden the dataset's utility, making it an indispensable tool for future LLM applications in digital marketing and beyond, including potential cross-disciplinary applications that push the boundaries of AI's role in various professional fields.\n\n\n---",
"### Contact and Collaboration\n\nAs a fervent advocate for AI-driven innovation in marketing, I welcome collaboration and dialogue with fellow AI enthusiasts, marketers, and builders. My aim is to foster a community of like-minded professionals who are passionate about exploring the frontiers of AI in marketing. Reach out to me on X (@youraimarketer) for any collaboration ideas, discussions, or queries regarding this dataset.\n\n---",
"### Acknowledgments\n\nThis dataset stands as a testament to the power of collaborative innovation, combining the best of human creativity and AI's transformative capabilities. A heartfelt thank you to all the contributors, including AI developers, data scientists, and marketing experts, whose collective efforts have brought this project to fruition.\n\n\n---"
] | [
47,
2,
22,
10,
293,
75,
144,
191,
96,
90,
101,
78
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #marketing #prompting #template #region-us \n# URL## Enhancing Large Language Model Performance in Digital Marketing Strategies with a Specialized Prompt Dataset### Creator: Muratcan Koylan\n\n---### About the Dataset\n\nThis dataset, comprising 4,643 specialized prompts across various categories of digital marketing, aims to enhance the performance of Large Language Models (LLMs) like GPT-3 in generating accurate, relevant, and industry-specific marketing strategies.\n30 Paid Search Prompts\n15 ROAS Prompts\n45 Facebook Ads Prompts\n13 Google Remarketing Prompts\n15 Ad Network Prompts\n14 Linkedin Ads Promtps\n14 Advertising Budget Prompts\n16 Quality Score Prompts\n14 BING Ads Prompts\n15 Classified Advertising Prompts\n20 CPM Prompts\n15 X (Twitter) Prompts\n15 CPC Prompts\n15 PPC Prompts\n15 Instagram Ads Prompts\n15 Youtube Ads Prompts\n15 Google Ads Prompts\n15 Programmatic Advertising Prompts\n15 Remarketing Promtps\n15 CPV Prompts\n15 Reach Promtps\n15 CPL Prompts\n15 Ad Rank Prompts\n15 Interstitial Prompts\n15 Ad Sense Prompts\n15 SEM Prompts\n20 Affiliates Prompts\n15 Dsiplay Advertisement Promtps\n20 Video Ads Promtps\n20 Mobile Ads Prompts\n20 TikTok Ads Promtps\n20 Pinterest Ads Prompts\n20 Shopping Ads Promtps#### Dataset Composition:\n- StrategyDomain: Main category representing the broader strategic area of digital marketing.\n- TacticScope: Sub-category focusing on specific tactics within the StrategyDomain.\n- StrategicPrompt: The actual marketing prompt text designed to simulate real-world marketing scenarios.",
"passage: #### Methodology:\nThe dataset represents a synergistic fusion of human expertise and advanced AI technology, blending 30% human-generated content with 70% synthetic data crafted using cutting-edge generative AI models like GPT-4, Claude2, and LLama2. This approach strategically leverages the nuanced creativity and contextual understanding of human input, while exponentially expanding the dataset's breadth and depth through AI's vast generative capabilities. This methodology ensures the dataset embodies both the rich, detailed insights of human marketing experts and the diverse, innovative perspectives that AI models can offer.#### Applications:\n- Fine-Tuning LLMs: This dataset is pivotal for refining LLMs to produce more targeted, effective marketing strategies. By exposing LLMs to a diverse array of real-world marketing scenarios, they become adept at crafting nuanced and strategically sound solutions.\n- Marketing Campaign Development: A valuable tool for marketers, this dataset aids in the ideation and development of comprehensive marketing campaigns, offering inspiration and strategic guidance.\n- Training AI Agents: Ideal for training AI agents to autonomously handle various digital marketing tasks, this dataset can drive efficiency and innovation in marketing automation.\n- Cross-Domain Potential: Beyond marketing, this dataset's structure and approach hold potential for adaptation and application in sectors like finance, healthcare, and education, where specialized language models can offer significant value.\n---### Experimental Results\n\nUpon rigorous testing against standard LLM benchmarks, the dataset has demonstrated remarkable improvements in producing strategically relevant, creatively rich, and platform-specific accurate marketing content. These results underscore the dataset's efficacy in enhancing the contextual and strategic understanding of LLMs within the realm of digital marketing. Results will be shared in the near future with a proper paper.\n\n---### Future Directions\n\nLooking ahead, the goal is to continuously evolve and enrich this dataset, incorporating emerging marketing trends and novel concepts. This ongoing development aims to broaden the dataset's utility, making it an indispensable tool for future LLM applications in digital marketing and beyond, including potential cross-disciplinary applications that push the boundaries of AI's role in various professional fields.\n\n\n---### Contact and Collaboration\n\nAs a fervent advocate for AI-driven innovation in marketing, I welcome collaboration and dialogue with fellow AI enthusiasts, marketers, and builders. My aim is to foster a community of like-minded professionals who are passionate about exploring the frontiers of AI in marketing. Reach out to me on X (@youraimarketer) for any collaboration ideas, discussions, or queries regarding this dataset.\n\n---"
] |
84fd1f48c87b726555e42c038adfdbb3170c5e3b | # Dataset Card for "ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bjoernp/ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered | [
"region:us"
] | 2023-12-10T23:55:28+00:00 | {"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 178488809.0, "num_examples": 75197}], "download_size": 70386691, "dataset_size": 178488809.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-10T23:55:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered"
More Information needed | [
"# Dataset Card for \"ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ise-uiuc_Magicoder-OSS-Instruct-75K-sharegpt-filtered\"\n\nMore Information needed"
] |
3012cfff31f2ec092bfc2ae6393f1168a41159e6 | # Dataset Card for "Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bjoernp/Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered | [
"region:us"
] | 2023-12-10T23:56:29+00:00 | {"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 48112862.0, "num_examples": 22608}], "download_size": 19769351, "dataset_size": 48112862.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-10T23:56:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered"
More Information needed | [
"# Dataset Card for \"Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Vezora_Tested-22k-Python-Alpaca-sharegpt-filtered\"\n\nMore Information needed"
] |
e163ef27777d36d0fcbe3011ca98f24f3ad1eb11 |
# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheBloke/neural-chat-7B-v3-2-GPTQ](https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ",
"harness_winogrande_5",
split="train")
```
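As a sketch under the same assumptions, the aggregated metrics can be pulled from the "results" configuration described above; the "latest" split alias is the one referenced in the note in the next section:
```python
from datasets import load_dataset

# "results" stores the aggregated metrics; "latest" aliases the newest run.
results = load_dataset(
    "open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ",
    "results",
    split="latest",
)
```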
## Latest results
These are the [latest results from run 2023-12-11T00:12:21.907526](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ/blob/main/results_2023-12-11T00-12-21.907526.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6058481456466821,
"acc_stderr": 0.03323160720607251,
"acc_norm": 0.6077924426433228,
"acc_norm_stderr": 0.033909992378155715,
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.5979099902582387,
"mc2_stderr": 0.01509977856693472
},
"harness|arc:challenge|25": {
"acc": 0.6296928327645052,
"acc_stderr": 0.01411129875167495,
"acc_norm": 0.659556313993174,
"acc_norm_stderr": 0.013847460518892978
},
"harness|hellaswag|10": {
"acc": 0.6360286795459071,
"acc_stderr": 0.004801572028920794,
"acc_norm": 0.8324039036048596,
"acc_norm_stderr": 0.003727438786513393
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5407407407407407,
"acc_stderr": 0.04304979692464241,
"acc_norm": 0.5407407407407407,
"acc_norm_stderr": 0.04304979692464241
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926605,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926605
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779833,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779833
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.03772446857518027,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.03772446857518027
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.49019607843137253,
"acc_stderr": 0.04974229460422817,
"acc_norm": 0.49019607843137253,
"acc_norm_stderr": 0.04974229460422817
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.03267151848924777,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.03267151848924777
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594964,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594964
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3544973544973545,
"acc_stderr": 0.024636830602842,
"acc_norm": 0.3544973544973545,
"acc_norm_stderr": 0.024636830602842
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.02489246917246283,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.02489246917246283
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.0303137105381989,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.0303137105381989
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5948717948717949,
"acc_stderr": 0.024890471769938145,
"acc_norm": 0.5948717948717949,
"acc_norm_stderr": 0.024890471769938145
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.027840811495871923,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.027840811495871923
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135356,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135356
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7963302752293578,
"acc_stderr": 0.017266742087630783,
"acc_norm": 0.7963302752293578,
"acc_norm_stderr": 0.017266742087630783
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4861111111111111,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.4861111111111111,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467617,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467617
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.04414343666854934,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.04414343666854934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6319018404907976,
"acc_stderr": 0.03789213935838396,
"acc_norm": 0.6319018404907976,
"acc_norm_stderr": 0.03789213935838396
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.02514093595033544,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.02514093595033544
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7841634738186463,
"acc_stderr": 0.014711684386139953,
"acc_norm": 0.7841634738186463,
"acc_norm_stderr": 0.014711684386139953
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.02590663263101613,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.02590663263101613
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39106145251396646,
"acc_stderr": 0.016320763763808383,
"acc_norm": 0.39106145251396646,
"acc_norm_stderr": 0.016320763763808383
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.026643278474508755,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.026643278474508755
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.02610567386140983,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.02610567386140983
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.42907801418439717,
"acc_stderr": 0.029525914302558555,
"acc_norm": 0.42907801418439717,
"acc_norm_stderr": 0.029525914302558555
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4165580182529335,
"acc_stderr": 0.012591153245057392,
"acc_norm": 0.4165580182529335,
"acc_norm_stderr": 0.012591153245057392
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5625,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.01978046595477751,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.01978046595477751
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.029162738410249772,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.029162738410249772
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.03218093795602357,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.03218093795602357
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4541003671970624,
"mc1_stderr": 0.017429593091323522,
"mc2": 0.5979099902582387,
"mc2_stderr": 0.01509977856693472
},
"harness|winogrande|5": {
"acc": 0.7947908445146015,
"acc_stderr": 0.01135031570746206
},
"harness|gsm8k|5": {
"acc": 0.5284306292645944,
"acc_stderr": 0.013750202076584419
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ | [
"region:us"
] | 2023-12-11T00:15:15+00:00 | {"pretty_name": "Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheBloke/neural-chat-7B-v3-2-GPTQ](https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T00:12:21.907526](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ/blob/main/results_2023-12-11T00-12-21.907526.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6058481456466821,\n \"acc_stderr\": 0.03323160720607251,\n \"acc_norm\": 0.6077924426433228,\n \"acc_norm_stderr\": 0.033909992378155715,\n \"mc1\": 0.4541003671970624,\n \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.5979099902582387,\n \"mc2_stderr\": 0.01509977856693472\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6296928327645052,\n \"acc_stderr\": 0.01411129875167495,\n \"acc_norm\": 0.659556313993174,\n \"acc_norm_stderr\": 0.013847460518892978\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6360286795459071,\n \"acc_stderr\": 0.004801572028920794,\n \"acc_norm\": 0.8324039036048596,\n \"acc_norm_stderr\": 0.003727438786513393\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n \"acc_stderr\": 0.04304979692464241,\n \"acc_norm\": 0.5407407407407407,\n \"acc_norm_stderr\": 0.04304979692464241\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779833,\n \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779833\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n 
\"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n \"acc_stderr\": 0.03772446857518027,\n \"acc_norm\": 0.5722543352601156,\n \"acc_norm_stderr\": 0.03772446857518027\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.49019607843137253,\n \"acc_stderr\": 0.04974229460422817,\n \"acc_norm\": 0.49019607843137253,\n \"acc_norm_stderr\": 0.04974229460422817\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.03267151848924777,\n \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.03267151848924777\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594964,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594964\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3544973544973545,\n \"acc_stderr\": 0.024636830602842,\n \"acc_norm\": 0.3544973544973545,\n \"acc_norm_stderr\": 0.024636830602842\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n \"acc_stderr\": 0.02489246917246283,\n \"acc_norm\": 0.7419354838709677,\n \"acc_norm_stderr\": 0.02489246917246283\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175007,\n \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175007\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5948717948717949,\n \"acc_stderr\": 0.024890471769938145,\n \"acc_norm\": 0.5948717948717949,\n \"acc_norm_stderr\": 0.024890471769938145\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.027840811495871923,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.027840811495871923\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135356,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135356\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7963302752293578,\n \"acc_stderr\": 0.017266742087630783,\n \"acc_norm\": 0.7963302752293578,\n \"acc_norm_stderr\": 0.017266742087630783\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467617,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467617\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.04414343666854934,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.04414343666854934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6319018404907976,\n \"acc_stderr\": 0.03789213935838396,\n \"acc_norm\": 0.6319018404907976,\n \"acc_norm_stderr\": 0.03789213935838396\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8205128205128205,\n \"acc_stderr\": 0.02514093595033544,\n \"acc_norm\": 0.8205128205128205,\n \"acc_norm_stderr\": 0.02514093595033544\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7841634738186463,\n 
\"acc_stderr\": 0.014711684386139953,\n \"acc_norm\": 0.7841634738186463,\n \"acc_norm_stderr\": 0.014711684386139953\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.02590663263101613,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.02590663263101613\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39106145251396646,\n \"acc_stderr\": 0.016320763763808383,\n \"acc_norm\": 0.39106145251396646,\n \"acc_norm_stderr\": 0.016320763763808383\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.026643278474508755,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.026643278474508755\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.02610567386140983,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.02610567386140983\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.42907801418439717,\n \"acc_stderr\": 0.029525914302558555,\n \"acc_norm\": 0.42907801418439717,\n \"acc_norm_stderr\": 0.029525914302558555\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4165580182529335,\n \"acc_stderr\": 0.012591153245057392,\n \"acc_norm\": 0.4165580182529335,\n \"acc_norm_stderr\": 0.012591153245057392\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.030134614954403924,\n \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.030134614954403924\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.01978046595477751,\n \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.01978046595477751\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249772,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249772\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.03218093795602357,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.03218093795602357\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4541003671970624,\n \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.5979099902582387,\n \"mc2_stderr\": 0.01509977856693472\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7947908445146015,\n \"acc_stderr\": 0.01135031570746206\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5284306292645944,\n \"acc_stderr\": 0.013750202076584419\n }\n}\n```", "repo_url": 
"https://huggingface.co/TheBloke/neural-chat-7B-v3-2-GPTQ", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|arc:challenge|25_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|gsm8k|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hellaswag|10_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-12-21.907526.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["**/details_harness|winogrande|5_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T00-12-21.907526.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T00_12_21.907526", "path": ["results_2023-12-11T00-12-21.907526.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T00-12-21.907526.parquet"]}]}]} | 2023-12-11T00:15:56+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheBloke/neural-chat-7B-v3-2-GPTQ on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
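```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheBloke__neural-chat-7B-v3-2-GPTQ",
	"harness_winogrande_5",
	split="train")
```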
## Latest results
These are the latest results from run 2023-12-11T00:12:21.907526 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval).
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/neural-chat-7B-v3-2-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T00:12:21.907526(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/neural-chat-7B-v3-2-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T00:12:21.907526(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
26,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheBloke/neural-chat-7B-v3-2-GPTQ## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheBloke/neural-chat-7B-v3-2-GPTQ on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T00:12:21.907526(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e611c98856c2b8c2a5459df9bc0c15492837646a |
# Dataset Card for Evaluation run of leejunhyeok/MoMo-70B-LoRA-V1.2_1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/leejunhyeok/MoMo-70B-LoRA-V1.2_1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [leejunhyeok/MoMo-70B-LoRA-V1.2_1](https://huggingface.co/leejunhyeok/MoMo-70B-LoRA-V1.2_1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1",
"harness_winogrande_5",
split="train")
```
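
To load the aggregated metrics instead, you can point at the "results" configuration; a minimal sketch, assuming it follows the same split naming as the per-task configurations (with "latest" pointing to the most recent run):

```python
from datasets import load_dataset

# Aggregated metrics for the run; the "latest" split is assumed to point
# to the most recent results, mirroring the per-task configurations above.
results = load_dataset("open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1",
	"results",
	split="latest")
```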
## Latest results
These are the [latest results from run 2023-12-11T00:49:27.690522](https://huggingface.co/datasets/open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1/blob/main/results_2023-12-11T00-49-27.690522.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6982187092414253,
"acc_stderr": 0.03042209430227255,
"acc_norm": 0.7019460256253881,
"acc_norm_stderr": 0.03101566845962337,
"mc1": 0.4394124847001224,
"mc1_stderr": 0.01737452048251371,
"mc2": 0.6140893594618404,
"mc2_stderr": 0.014827391515273696
},
"harness|arc:challenge|25": {
"acc": 0.6715017064846417,
"acc_stderr": 0.0137249784655373,
"acc_norm": 0.7064846416382252,
"acc_norm_stderr": 0.013307250444941115
},
"harness|hellaswag|10": {
"acc": 0.6772555267874926,
"acc_stderr": 0.004665704208339046,
"acc_norm": 0.8639713204540929,
"acc_norm_stderr": 0.0034211839093201708
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.04171654161354543,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.04171654161354543
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8157894736842105,
"acc_stderr": 0.0315469804508223,
"acc_norm": 0.8157894736842105,
"acc_norm_stderr": 0.0315469804508223
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544064,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544064
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.03116489966694862,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.03116489966694862
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.030783736757745653,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.030783736757745653
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947558,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947558
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.025591857761382182,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.025591857761382182
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.02173254068932929,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.02173254068932929
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8303030303030303,
"acc_stderr": 0.029311188674983137,
"acc_norm": 0.8303030303030303,
"acc_norm_stderr": 0.029311188674983137
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.022390787638216773,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.022390787638216773
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.927461139896373,
"acc_stderr": 0.018718998520678178,
"acc_norm": 0.927461139896373,
"acc_norm_stderr": 0.018718998520678178
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6948717948717948,
"acc_stderr": 0.023346335293325884,
"acc_norm": 0.6948717948717948,
"acc_norm_stderr": 0.023346335293325884
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028597,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028597
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.02684151432295894,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.02684151432295894
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.45695364238410596,
"acc_stderr": 0.04067325174247443,
"acc_norm": 0.45695364238410596,
"acc_norm_stderr": 0.04067325174247443
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.01280978008187893,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.01280978008187893
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6064814814814815,
"acc_stderr": 0.03331747876370312,
"acc_norm": 0.6064814814814815,
"acc_norm_stderr": 0.03331747876370312
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969427,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969427
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8945147679324894,
"acc_stderr": 0.01999556072375854,
"acc_norm": 0.8945147679324894,
"acc_norm_stderr": 0.01999556072375854
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.027373095500540193,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.027373095500540193
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.030884661089515375,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.030884661089515375
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035196,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035196
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.03602814176392645,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.03602814176392645
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8220858895705522,
"acc_stderr": 0.03004735765580663,
"acc_norm": 0.8220858895705522,
"acc_norm_stderr": 0.03004735765580663
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.0376017800602662,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.0376017800602662
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7832369942196532,
"acc_stderr": 0.022183477668412856,
"acc_norm": 0.7832369942196532,
"acc_norm_stderr": 0.022183477668412856
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6100558659217877,
"acc_stderr": 0.016312376629213067,
"acc_norm": 0.6100558659217877,
"acc_norm_stderr": 0.016312376629213067
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7556270096463023,
"acc_stderr": 0.024406162094668886,
"acc_norm": 0.7556270096463023,
"acc_norm_stderr": 0.024406162094668886
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.02118589361522518,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.02118589361522518
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.02965823509766691,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.02965823509766691
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.559973924380704,
"acc_stderr": 0.012678037478574513,
"acc_norm": 0.559973924380704,
"acc_norm_stderr": 0.012678037478574513
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7389705882352942,
"acc_stderr": 0.026679252270103142,
"acc_norm": 0.7389705882352942,
"acc_norm_stderr": 0.026679252270103142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7630718954248366,
"acc_stderr": 0.017201662169789772,
"acc_norm": 0.7630718954248366,
"acc_norm_stderr": 0.017201662169789772
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04265792110940589,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04265792110940589
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8081632653061225,
"acc_stderr": 0.025206963154225392,
"acc_norm": 0.8081632653061225,
"acc_norm_stderr": 0.025206963154225392
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.024112678240900798,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.024112678240900798
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.03882310850890594,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.03882310850890594
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4394124847001224,
"mc1_stderr": 0.01737452048251371,
"mc2": 0.6140893594618404,
"mc2_stderr": 0.014827391515273696
},
"harness|winogrande|5": {
"acc": 0.8318863456985004,
"acc_stderr": 0.010510336954166723
},
"harness|gsm8k|5": {
"acc": 0.5663381349507203,
"acc_stderr": 0.013650728047064688
}
}
```
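For readers who prefer to inspect these numbers programmatically, the sketch below loads the aggregated results with 🤗 Datasets. It assumes the `results` configuration and `latest` split declared in this repository's metadata are available; swap in a per-task config name (e.g. `harness_winogrande_5`) to pull the detailed predictions instead.

```python
# A minimal sketch, assuming the "results" config and "latest" split
# listed in this repository's configuration metadata.
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1",
    "results",
    split="latest",
)

# Each row stores the aggregated metrics for one evaluation run.
print(results[0])
```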
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
"region:us"
] | 2023-12-11T00:52:28+00:00 | {"pretty_name": "Evaluation run of leejunhyeok/MoMo-70B-LoRA-V1.2_1", "dataset_summary": "Dataset automatically created during the evaluation run of model [leejunhyeok/MoMo-70B-LoRA-V1.2_1](https://huggingface.co/leejunhyeok/MoMo-70B-LoRA-V1.2_1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T00:49:27.690522](https://huggingface.co/datasets/open-llm-leaderboard/details_leejunhyeok__MoMo-70B-LoRA-V1.2_1/blob/main/results_2023-12-11T00-49-27.690522.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6982187092414253,\n \"acc_stderr\": 0.03042209430227255,\n \"acc_norm\": 0.7019460256253881,\n \"acc_norm_stderr\": 0.03101566845962337,\n \"mc1\": 0.4394124847001224,\n \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6140893594618404,\n \"mc2_stderr\": 0.014827391515273696\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6715017064846417,\n \"acc_stderr\": 0.0137249784655373,\n \"acc_norm\": 0.7064846416382252,\n \"acc_norm_stderr\": 0.013307250444941115\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6772555267874926,\n \"acc_stderr\": 0.004665704208339046,\n \"acc_norm\": 0.8639713204540929,\n \"acc_norm_stderr\": 0.0034211839093201708\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.04171654161354543,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.04171654161354543\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8157894736842105,\n \"acc_stderr\": 0.0315469804508223,\n \"acc_norm\": 0.8157894736842105,\n \"acc_norm_stderr\": 0.0315469804508223\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544064,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544064\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.03116489966694862,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.03116489966694862\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.030783736757745653,\n \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.030783736757745653\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947558,\n \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947558\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.025591857761382182,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.025591857761382182\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.02173254068932929,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.02173254068932929\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.029311188674983137,\n \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.029311188674983137\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.022390787638216773,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.022390787638216773\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.927461139896373,\n \"acc_stderr\": 0.018718998520678178,\n \"acc_norm\": 0.927461139896373,\n \"acc_norm_stderr\": 0.018718998520678178\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n 
\"acc\": 0.6948717948717948,\n \"acc_stderr\": 0.023346335293325884,\n \"acc_norm\": 0.6948717948717948,\n \"acc_norm_stderr\": 0.023346335293325884\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028597,\n \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028597\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.02684151432295894,\n \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.02684151432295894\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.45695364238410596,\n \"acc_stderr\": 0.04067325174247443,\n \"acc_norm\": 0.45695364238410596,\n \"acc_norm_stderr\": 0.04067325174247443\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9009174311926605,\n \"acc_stderr\": 0.01280978008187893,\n \"acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.01280978008187893\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9068627450980392,\n \"acc_stderr\": 0.020397853969427,\n \"acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969427\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8945147679324894,\n \"acc_stderr\": 0.01999556072375854,\n \"acc_norm\": 0.8945147679324894,\n \"acc_norm_stderr\": 0.01999556072375854\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n \"acc_stderr\": 0.027373095500540193,\n \"acc_norm\": 0.7892376681614349,\n \"acc_norm_stderr\": 0.027373095500540193\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515375,\n \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515375\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035196,\n \"acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035196\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.03602814176392645,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.03602814176392645\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8220858895705522,\n \"acc_stderr\": 0.03004735765580663,\n \"acc_norm\": 0.8220858895705522,\n \"acc_norm_stderr\": 0.03004735765580663\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.0376017800602662,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.0376017800602662\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8697318007662835,\n \"acc_stderr\": 0.012036729568216054,\n 
\"acc_norm\": 0.8697318007662835,\n \"acc_norm_stderr\": 0.012036729568216054\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7832369942196532,\n \"acc_stderr\": 0.022183477668412856,\n \"acc_norm\": 0.7832369942196532,\n \"acc_norm_stderr\": 0.022183477668412856\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6100558659217877,\n \"acc_stderr\": 0.016312376629213067,\n \"acc_norm\": 0.6100558659217877,\n \"acc_norm_stderr\": 0.016312376629213067\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7556270096463023,\n \"acc_stderr\": 0.024406162094668886,\n \"acc_norm\": 0.7556270096463023,\n \"acc_norm_stderr\": 0.024406162094668886\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.02118589361522518,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.02118589361522518\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.02965823509766691,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.02965823509766691\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.559973924380704,\n \"acc_stderr\": 0.012678037478574513,\n \"acc_norm\": 0.559973924380704,\n \"acc_norm_stderr\": 0.012678037478574513\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7389705882352942,\n \"acc_stderr\": 0.026679252270103142,\n \"acc_norm\": 0.7389705882352942,\n \"acc_norm_stderr\": 0.026679252270103142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7630718954248366,\n \"acc_stderr\": 0.017201662169789772,\n \"acc_norm\": 0.7630718954248366,\n \"acc_norm_stderr\": 0.017201662169789772\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.04265792110940589,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04265792110940589\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8081632653061225,\n \"acc_stderr\": 0.025206963154225392,\n \"acc_norm\": 0.8081632653061225,\n \"acc_norm_stderr\": 0.025206963154225392\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.024112678240900798,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.024112678240900798\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4394124847001224,\n \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6140893594618404,\n \"mc2_stderr\": 0.014827391515273696\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8318863456985004,\n \"acc_stderr\": 0.010510336954166723\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5663381349507203,\n \"acc_stderr\": 0.013650728047064688\n }\n}\n```", "repo_url": 
"https://huggingface.co/leejunhyeok/MoMo-70B-LoRA-V1.2_1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|arc:challenge|25_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|gsm8k|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hellaswag|10_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-49-27.690522.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-49-27.690522.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-49-27.690522.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T00-49-27.690522.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-49-27.690522.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T00-49-27.690522.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["**/details_harness|winogrande|5_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T00-49-27.690522.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T00_49_27.690522", "path": ["results_2023-12-11T00-49-27.690522.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T00-49-27.690522.parquet"]}]}]} | 2023-12-11T00:53:08+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of leejunhyeok/MoMo-70B-LoRA-V1.2_1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model leejunhyeok/MoMo-70B-LoRA-V1.2_1 on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-11T00:49:27.690522(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B-v2.03](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Each evaluated task has its own configuration; the "train" split
# always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03",
	"harness_winogrande_5",
	split="train")
```
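The loaded split is a regular `datasets.Dataset`, so the usual inspection tools apply; a quick sketch for eyeballing a few rows of the Winogrande details loaded above:

```python
# Convert the details split into a pandas DataFrame for inspection.
df = data.to_pandas()
print(df.head())   # first few evaluated examples
print(df.columns)  # available fields for this task
```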
## Latest results
These are the [latest results from run 2023-12-11T01:08:10.868540](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03/blob/main/results_2023-12-11T01-08-10.868540.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6311240267277043,
"acc_stderr": 0.032483219688012585,
"acc_norm": 0.6342658628547596,
"acc_norm_stderr": 0.033138776879109225,
"mc1": 0.33659730722154224,
"mc1_stderr": 0.016542412809494887,
"mc2": 0.48533561495565075,
"mc2_stderr": 0.015259691745766833
},
"harness|arc:challenge|25": {
"acc": 0.5981228668941979,
"acc_stderr": 0.014327268614578274,
"acc_norm": 0.6382252559726962,
"acc_norm_stderr": 0.014041957945038076
},
"harness|hellaswag|10": {
"acc": 0.6450906193985262,
"acc_stderr": 0.004775079636567097,
"acc_norm": 0.8473411670981876,
"acc_norm_stderr": 0.0035892328893065146
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353227,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353227
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.038424985593952694,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.038424985593952694
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493857,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493857
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.037161774375660164,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.037161774375660164
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416907,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416907
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.8,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.02544636563440678,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.02544636563440678
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782658,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782658
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5320197044334976,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.5320197044334976,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.02381447708659355,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.02381447708659355
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6256410256410256,
"acc_stderr": 0.024537591572830503,
"acc_norm": 0.6256410256410256,
"acc_norm_stderr": 0.024537591572830503
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.41721854304635764,
"acc_stderr": 0.040261414976346104,
"acc_norm": 0.41721854304635764,
"acc_norm_stderr": 0.040261414976346104
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374294,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374294
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639318,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944863,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944863
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615624,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615624
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8418803418803419,
"acc_stderr": 0.023902325549560417,
"acc_norm": 0.8418803418803419,
"acc_norm_stderr": 0.023902325549560417
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.013740797258579828,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.013740797258579828
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.0246853168672578,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.0246853168672578
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3452513966480447,
"acc_stderr": 0.01590143260893036,
"acc_norm": 0.3452513966480447,
"acc_norm_stderr": 0.01590143260893036
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7026143790849673,
"acc_stderr": 0.02617390850671858,
"acc_norm": 0.7026143790849673,
"acc_norm_stderr": 0.02617390850671858
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.02492200116888633,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.02492200116888633
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.44680851063829785,
"acc_stderr": 0.029658235097666907,
"acc_norm": 0.44680851063829785,
"acc_norm_stderr": 0.029658235097666907
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46479791395045633,
"acc_stderr": 0.012738547371303957,
"acc_norm": 0.46479791395045633,
"acc_norm_stderr": 0.012738547371303957
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6433823529411765,
"acc_stderr": 0.02909720956841195,
"acc_norm": 0.6433823529411765,
"acc_norm_stderr": 0.02909720956841195
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.01943177567703731,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.01943177567703731
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.673469387755102,
"acc_stderr": 0.030021056238440303,
"acc_norm": 0.673469387755102,
"acc_norm_stderr": 0.030021056238440303
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33659730722154224,
"mc1_stderr": 0.016542412809494887,
"mc2": 0.48533561495565075,
"mc2_stderr": 0.015259691745766833
},
"harness|winogrande|5": {
"acc": 0.8089976322020521,
"acc_stderr": 0.011047808761510432
},
"harness|gsm8k|5": {
"acc": 0.510235026535254,
"acc_stderr": 0.013769598923012388
}
}
```
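Individual numbers can be pulled out of this structure with plain dictionary access; a minimal sketch, assuming the JSON above has been saved locally as `results.json` (a hypothetical filename):

```python
import json

with open("results.json") as f:
    results = json.load(f)

# Aggregated accuracy across all tasks, plus two task-level scores.
print(results["all"]["acc"])                   # 0.6311...
print(results["harness|gsm8k|5"]["acc"])       # 0.5102...
print(results["harness|winogrande|5"]["acc"])  # 0.8089...
```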
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
#region-us
|
| [
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.03 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T01:08:10.868540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.03 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T01:08:10.868540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.03 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T01:08:10.868540(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
27e5628c0f1c5bec5b12b121bcb618febe5a1ad8 |
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03-128k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03-128k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B-v2.03-128k](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset

# Load the detailed per-sample results for one task (here: Winogrande, 5-shot).
# The "train" split always points to the latest evaluation run.
data = load_dataset("open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k",
	"harness_winogrande_5",
	split="train")
```
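To fetch only the aggregated metrics instead of the per-sample details, a minimal sketch could load the "results" configuration, which (per the config list of this repository) also exposes a "latest" split; the exact row layout is not documented here, so treat the printed structure as something to inspect:

```python
from datasets import load_dataset

# Sketch: load the aggregated "results" configuration of this repository.
# Assumes the "latest" split convention shown above; the row schema is
# whatever the evaluation harness wrote, so inspect it before relying on it.
results = load_dataset("open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k",
	"results",
	split="latest")
print(results[0])  # one row per run, containing the aggregated metrics
```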
## Latest results
These are the [latest results from run 2023-12-11T01:11:36.377069](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k/blob/main/results_2023-12-11T01-11-36.377069.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6308159681070862,
"acc_stderr": 0.03241075436743821,
"acc_norm": 0.6340939515561961,
"acc_norm_stderr": 0.03306369804375197,
"mc1": 0.35495716034271724,
"mc1_stderr": 0.0167508623813759,
"mc2": 0.5116101369020933,
"mc2_stderr": 0.015317262236176729
},
"harness|arc:challenge|25": {
"acc": 0.6015358361774744,
"acc_stderr": 0.014306946052735565,
"acc_norm": 0.6467576791808873,
"acc_norm_stderr": 0.013967822714840056
},
"harness|hellaswag|10": {
"acc": 0.6435968930491934,
"acc_stderr": 0.004779574402771385,
"acc_norm": 0.8456482772356104,
"acc_norm_stderr": 0.0036054721167622923
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.042446332383532265,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.042446332383532265
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.038781398887976104,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.038781398887976104
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.049020713000019756,
"acc_norm": 0.61,
"acc_norm_stderr": 0.049020713000019756
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.025591857761382182,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.025591857761382182
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.02390491431178265,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.02390491431178265
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.03158415324047712,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.03158415324047712
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593556,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593556
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6435897435897436,
"acc_stderr": 0.024283140529467305,
"acc_norm": 0.6435897435897436,
"acc_norm_stderr": 0.024283140529467305
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.028037929969114996,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.028037929969114996
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.01584825580650154,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.01584825580650154
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8186274509803921,
"acc_stderr": 0.027044621719474086,
"acc_norm": 0.8186274509803921,
"acc_norm_stderr": 0.027044621719474086
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233504,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233504
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7040358744394619,
"acc_stderr": 0.030636591348699803,
"acc_norm": 0.7040358744394619,
"acc_norm_stderr": 0.030636591348699803
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516304,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516304
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.047184714852195886,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.047184714852195886
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.02363687331748928,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.02363687331748928
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8212005108556832,
"acc_stderr": 0.013702643715368985,
"acc_norm": 0.8212005108556832,
"acc_norm_stderr": 0.013702643715368985
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31843575418994413,
"acc_stderr": 0.015581008080360276,
"acc_norm": 0.31843575418994413,
"acc_norm_stderr": 0.015581008080360276
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7026143790849673,
"acc_stderr": 0.02617390850671858,
"acc_norm": 0.7026143790849673,
"acc_norm_stderr": 0.02617390850671858
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.02465968518596728,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.02465968518596728
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873862,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873862
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4661016949152542,
"acc_stderr": 0.012740853872949832,
"acc_norm": 0.4661016949152542,
"acc_norm_stderr": 0.012740853872949832
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.029029422815681397,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.029029422815681397
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6552287581699346,
"acc_stderr": 0.019228322018696644,
"acc_norm": 0.6552287581699346,
"acc_norm_stderr": 0.019228322018696644
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.689795918367347,
"acc_stderr": 0.029613459872484375,
"acc_norm": 0.689795918367347,
"acc_norm_stderr": 0.029613459872484375
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35495716034271724,
"mc1_stderr": 0.0167508623813759,
"mc2": 0.5116101369020933,
"mc2_stderr": 0.015317262236176729
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989248
},
"harness|gsm8k|5": {
"acc": 0.5049279757391963,
"acc_stderr": 0.01377181577547058
}
}
```
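To work with these numbers programmatically, a small sketch like the following extracts the per-task MMLU (hendrycksTest) accuracies from the JSON above and recomputes their simple macro average. It assumes the JSON has been saved locally as `results.json` (a hypothetical filename); the unweighted mean may differ slightly from how the leaderboard itself aggregates:

```python
import json

# Load the results JSON shown above (hypothetical local path).
with open("results.json") as f:
    results = json.load(f)

# Collect the 5-shot MMLU (hendrycksTest) accuracies and average them.
mmlu_accs = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"MMLU tasks: {len(mmlu_accs)}, mean acc: {sum(mmlu_accs) / len(mmlu_accs):.4f}")
```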
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
"region:us"
] | 2023-12-11T01:14:28+00:00 | {"pretty_name": "Evaluation run of perlthoughts/Chupacabra-7B-v2.03-128k", "dataset_summary": "Dataset automatically created during the evaluation run of model [perlthoughts/Chupacabra-7B-v2.03-128k](https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T01:11:36.377069](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k/blob/main/results_2023-12-11T01-11-36.377069.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6308159681070862,\n \"acc_stderr\": 0.03241075436743821,\n \"acc_norm\": 0.6340939515561961,\n \"acc_norm_stderr\": 0.03306369804375197,\n \"mc1\": 0.35495716034271724,\n \"mc1_stderr\": 0.0167508623813759,\n \"mc2\": 0.5116101369020933,\n \"mc2_stderr\": 0.015317262236176729\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6015358361774744,\n \"acc_stderr\": 0.014306946052735565,\n \"acc_norm\": 0.6467576791808873,\n \"acc_norm_stderr\": 0.013967822714840056\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6435968930491934,\n \"acc_stderr\": 0.004779574402771385,\n \"acc_norm\": 0.8456482772356104,\n \"acc_norm_stderr\": 0.0036054721167622923\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.042446332383532265,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.042446332383532265\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.038781398887976104,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.038781398887976104\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.049020713000019756,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.049020713000019756\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.025591857761382182,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.025591857761382182\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.02390491431178265,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.02390491431178265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047712,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047712\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593556,\n \"acc_norm\": 0.8756476683937824,\n 
\"acc_norm_stderr\": 0.023814477086593556\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6435897435897436,\n \"acc_stderr\": 0.024283140529467305,\n \"acc_norm\": 0.6435897435897436,\n \"acc_norm_stderr\": 0.024283140529467305\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114996,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114996\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650154,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650154\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8186274509803921,\n \"acc_stderr\": 0.027044621719474086,\n \"acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.027044621719474086\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233504,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233504\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n \"acc_stderr\": 0.030636591348699803,\n \"acc_norm\": 0.7040358744394619,\n \"acc_norm_stderr\": 0.030636591348699803\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516304,\n \"acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516304\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.047184714852195886,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.047184714852195886\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.02363687331748928,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.02363687331748928\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n \"acc_stderr\": 0.013702643715368985,\n \"acc_norm\": 0.8212005108556832,\n \"acc_norm_stderr\": 0.013702643715368985\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31843575418994413,\n \"acc_stderr\": 0.015581008080360276,\n \"acc_norm\": 0.31843575418994413,\n \"acc_norm_stderr\": 0.015581008080360276\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7026143790849673,\n \"acc_stderr\": 0.02617390850671858,\n \"acc_norm\": 0.7026143790849673,\n \"acc_norm_stderr\": 0.02617390850671858\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.02465968518596728,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.02465968518596728\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873862,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873862\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4661016949152542,\n \"acc_stderr\": 0.012740853872949832,\n \"acc_norm\": 0.4661016949152542,\n \"acc_norm_stderr\": 0.012740853872949832\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.029029422815681397,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.029029422815681397\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6552287581699346,\n \"acc_stderr\": 0.019228322018696644,\n \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.019228322018696644\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.689795918367347,\n \"acc_stderr\": 0.029613459872484375,\n \"acc_norm\": 0.689795918367347,\n \"acc_norm_stderr\": 0.029613459872484375\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35495716034271724,\n \"mc1_stderr\": 0.0167508623813759,\n \"mc2\": 0.5116101369020933,\n \"mc2_stderr\": 0.015317262236176729\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8105761641673244,\n \"acc_stderr\": 0.011012790432989248\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.5049279757391963,\n \"acc_stderr\": 0.01377181577547058\n }\n}\n```", "repo_url": "https://huggingface.co/perlthoughts/Chupacabra-7B-v2.03-128k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|arc:challenge|25_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|gsm8k|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hellaswag|10_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-11-36.377069.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-11-36.377069.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-11-36.377069.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T01-11-36.377069.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-11-36.377069.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T01_11_36.377069", "path": ["**/details_harness|winogrande|5_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T01-11-36.377069.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_11T01_11_36.377069", "path": ["results_2023-12-11T01-11-36.377069.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T01-11-36.377069.parquet"]}]}]} | 2023-12-11T01:15:11+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of perlthoughts/Chupacabra-7B-v2.03-128k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model perlthoughts/Chupacabra-7B-v2.03-128k on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
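For example, a minimal sketch using the `datasets` library — note the repository id below is assumed from the leaderboard's usual `details_<org>__<model>` naming, and `harness_winogrande_5` is one of the configurations listed in the metadata:
```
from datasets import load_dataset

# Repository id assumed from the leaderboard's usual naming scheme.
data = load_dataset(
    "open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.03-128k",
    "harness_winogrande_5",  # one of the 63 configurations
    split="latest",
)
```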
## Latest results
These are the latest results from run 2023-12-11T01:11:36.377069 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
49eac05711dbb74b2fe176f899ae39e4fe79ea55 | The dataset HuggingFaceH4/ultrafeedback_binarized was translated using the Korean-English translation model squarelike/Gugugo-koen.
https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized
https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ | hankang2023/Ultrafeedback_binarized.ko.hankang | [
"license:mit",
"region:us"
] | 2023-12-11T01:18:54+00:00 | {"license": "mit"} | 2023-12-13T01:58:15+00:00 | [] | [] | TAGS
814c8cfcfbe964ccd41c46652a0acf1190163bc5 | **Status:** Archive (code is provided as-is, no updates expected)
# Jukebox
Code for "Jukebox: A Generative Model for Music"
[Paper](https://arxiv.org/abs/2005.00341)
[Blog](https://openai.com/blog/jukebox)
[Explorer](http://jukebox.openai.com/)
[Colab](https://colab.research.google.com/github/openai/jukebox/blob/master/jukebox/Interacting_with_Jukebox.ipynb)
# Install
Install the conda package manager from https://docs.conda.io/en/latest/miniconda.html
```
# Required: Sampling
conda create --name jukebox python=3.7.5
conda activate jukebox
conda install mpi4py=3.0.3 # if this fails, try: pip install mpi4py==3.0.3
conda install pytorch=1.4 torchvision=0.5 cudatoolkit=10.0 -c pytorch
git clone https://github.com/openai/jukebox.git
cd jukebox
pip install -r requirements.txt
pip install -e .
# Required: Training
conda install av=7.0.01 -c conda-forge
pip install ./tensorboardX
# Optional: Apex for faster training with fused_adam
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=10.0 -c pytorch
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
```
# Sampling
## Sampling from scratch
To sample normally, run the following command. Model can be `5b`, `5b_lyrics`, `1b_lyrics`
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
```
python jukebox/sample.py --model=1b_lyrics --name=sample_1b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=16 --hop_fraction=0.5,0.5,0.125
```
The above generates the first `sample_length_in_seconds` seconds of audio from a song of total length `total_sample_length_in_seconds`.
To use multiple GPUs, launch the above scripts as `mpiexec -n {ngpus} python jukebox/sample.py ...` so they use `{ngpus}` GPUs.
The samples decoded from each level are stored in `{name}/level_{level}`.
You can also view the samples as an HTML page with the aligned lyrics under `{name}/level_{level}/index.html`.
Run `python -m http.server` and open the html through the server to see the lyrics animate as the song plays.
A summary of all sampling data including zs, x, labels and sampling_kwargs is stored in `{name}/level_{level}/data.pth.tar`.
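For a quick peek at that summary, a minimal sketch (assuming the file loads as a plain dict with the keys listed above):
```
import torch

# Sketch: inspect the sampling summary saved by the run above.
data = torch.load("sample_5b/level_0/data.pth.tar", map_location="cpu")
print(data.keys())  # expect zs, x, labels and sampling_kwargs
```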
The hps are for a V100 GPU with 16 GB GPU memory. The `1b_lyrics`, `5b`, and `5b_lyrics` top-level priors take up
3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store transformer key, value cache is about 400 MB
for `1b_lyrics` and 1 GB for `5b_lyrics` per sample. If you are having trouble with CUDA OOM issues, try `1b_lyrics` or
decrease `max_batch_size` in sample.py, and `--n_samples` in the script call.
On a V100, it takes about 3 hrs to fully sample 20 seconds of music. Since this is a long time, it is recommended to use `n_samples > 1` so you can generate as many samples as possible in parallel. The 1B lyrics and upsamplers can process 16 samples at a time, while 5B can fit only up to 3. Since the vast majority of time is spent on upsampling, we recommend using a multiple of 3 less than 16 like `--n_samples 15` for `5b_lyrics`. This will make the top-level generate samples in groups of three while upsampling is done in one pass.
To continue sampling from already generated codes for a longer duration, you can run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=continue \
--codes_file=sample_5b/level_0/data.pth.tar --sample_length_in_seconds=40 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
Here, we take the 20-second samples saved from the first sampling run at `sample_5b/level_0/data.pth.tar` and continue by adding 20 more seconds.
You could also continue directly from the level 2 saved outputs, just pass `--codes_file=sample_5b/level_2/data.pth.tar`.
Note this will upsample the full 40 seconds song at the end.
If you stopped sampling at only the first level and want to upsample the saved codes, you can run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=upsample \
--codes_file=sample_5b/level_2/data.pth.tar --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
Here, we take the 20-second samples saved from the first sampling run at `sample_5b/level_2/data.pth.tar` and upsample the lower two levels.
## Prompt with your own music
If you want to prompt the model with your own creative piece or any other music, first save them as wave files and run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b_prompted --levels=3 --mode=primed \
--audio_file=path/to/recording.wav,awesome-mix.wav,fav-song.wav,etc.wav --prompt_length_in_seconds=12 \
--sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
This will load the four files, tile them to fill up to `n_samples` batch size, and prime the model with the first `prompt_length_in_seconds` seconds.
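A rough sketch of that tiling step, under the assumption that files are simply repeated cyclically and cropped to a common length (not the exact `sample.py` code):
```
import numpy as np

def tile_prompts(wavs, n_samples):
    # wavs: list of 1-D float arrays loaded from the wave files
    reps = [wavs[i % len(wavs)] for i in range(n_samples)]   # repeat files cyclically
    length = min(len(w) for w in reps)                       # crop to a common length
    return np.stack([w[:length] for w in reps])              # shape (n_samples, length)
```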
# Training
## VQVAE
To train a small vqvae, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae --name=small_vqvae --sample_length=262144 --bs=4 \
--audio_files_dir={audio_files_dir} --labels=False --train --aug_shift --aug_blend
```
Here, `{audio_files_dir}` is the directory in which you can put the audio files for your dataset, and `{ngpus}` is the number of GPUs you want to use for training.
The above trains a two-level VQ-VAE with `downs_t = (5,3)`, and `strides_t = (2, 2)` meaning we downsample the audio by `2**5 = 32` to get the first level of codes, and `2**8 = 256` to get the second level codes.
Checkpoints are stored in the `logs` folder. You can monitor the training by running Tensorboard
```
tensorboard --logdir logs
```
## Prior
### Train prior or upsamplers
Once the VQ-VAE is trained, we can restore it from its saved checkpoint and train priors on the learnt codes.
To train the top-level prior, we can run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_prior,all_fp16,cpu_ema --name=small_prior \
--sample_length=2097152 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=1 --weight_decay=0.01 --save_iters=1000
```
To train the upsampler, we can run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_upsampler,all_fp16,cpu_ema --name=small_upsampler \
--sample_length=262144 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=0 --weight_decay=0.01 --save_iters=1000
```
We pass `sample_length = n_ctx * downsample_of_level` so that after downsampling the tokens match the n_ctx of the prior hps.
Here, `n_ctx = 8192` and `downsamples = (32, 256)`, giving `sample_lengths = (8192 * 32, 8192 * 256) = (262144, 2097152)` respectively for the bottom and top level.
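As a quick sanity check of those numbers:
```
n_ctx = 8192
downs_t = (5, 3)  # per-level downsampling exponents from the small_vqvae run above
# Cumulative downsampling at each level: 2**5 = 32 (bottom), 2**(5+3) = 256 (top).
downsamples = [2 ** sum(downs_t[:i + 1]) for i in range(len(downs_t))]
sample_lengths = [n_ctx * d for d in downsamples]
print(downsamples, sample_lengths)  # [32, 256] [262144, 2097152]
```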
### Learning rate annealing
To get the best sample quality, anneal the learning rate to 0 near the end of training. To do so, continue training from the latest
checkpoint and run with
```
--restore_prior="path/to/checkpoint" --lr_use_linear_decay --lr_start_linear_decay={already_trained_steps} --lr_decay={decay_steps_as_needed}
```
### Reuse pre-trained VQ-VAE and train top-level prior on new dataset from scratch.
#### Train without labels
Our pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers
can upsample them back to audio that sounds very similar to the original audio.
To re-use these for a new dataset of your choice, you can retrain just the top-level prior.
To train top-level on a new dataset, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=False --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
Training the `small_prior` with a batch size of 2, 4, and 8 requires 6.7 GB, 9.3 GB, and 15.8 GB of GPU memory, respectively. A few days to a week of training typically yields reasonable samples when the dataset is homogeneous (e.g. all piano pieces, songs of the same style, etc).
Near the end of training, follow [this](#learning-rate-annealing) to anneal the learning rate to 0.
#### Sample from new model
You can then run sample.py with the top-level of our models replaced by your new model. To do so,
- Add an entry `my_model=("vqvae", "upsampler_level_0", "upsampler_level_1", "small_prior")` in `MODELS` in `make_models.py`.
- Update the `small_prior` dictionary in `hparams.py` to include `restore_prior='path/to/checkpoint'`. If you
changed any hps directly in the command line script (e.g. `heads`), make sure to update them in the dictionary too so
that `make_models` restores our checkpoint correctly.
- Run sample.py as outlined in the sampling section, but now with `--model=my_model`
For example, let's say we trained `small_vqvae`, `small_prior`, and `small_upsampler` under `/path/to/jukebox/logs`. In `make_models.py`, we are going to declare a tuple of the new models as `my_model`.
```
MODELS = {
'5b': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b"),
'5b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b_lyrics"),
'1b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_1b_lyrics"),
'my_model': ("my_small_vqvae", "my_small_upsampler", "my_small_prior"),
}
```
Next, in `hparams.py`, we add them to the registry with the corresponding `restore_`paths and any other command line options used during training. Another important note is that for top-level priors with lyric conditioning, we have to locate a self-attention layer that shows alignment between the lyric and music tokens. Look for layers where `prior.prior.transformer._attn_mods[layer].attn_func` is either 6 or 7. If your model is starting to sing along lyrics, it means some layer, head pair has learned alignment. Congrats!
```
my_small_vqvae = Hyperparams(
restore_vqvae='/path/to/jukebox/logs/small_vqvae/checkpoint_some_step.pth.tar',
)
my_small_vqvae.update(small_vqvae)
HPARAMS_REGISTRY["my_small_vqvae"] = my_small_vqvae
my_small_prior = Hyperparams(
restore_prior='/path/to/jukebox/logs/small_prior/checkpoint_latest.pth.tar',
level=1,
labels=False,
# TODO For the two lines below, if `--labels` was used and the model is
# trained with lyrics, find and enter the layer, head pair that has learned
# alignment.
alignment_layer=47,
alignment_head=0,
)
my_small_prior.update(small_prior)
HPARAMS_REGISTRY["my_small_prior"] = my_small_prior
my_small_upsampler = Hyperparams(
restore_prior='/path/to/jukebox/logs/small_upsampler/checkpoint_latest.pth.tar',
level=0,
labels=False,
)
my_small_upsampler.update(small_upsampler)
HPARAMS_REGISTRY["my_small_upsampler"] = my_small_upsampler
```
#### Train with labels
To train with your own metadata for your audio files, implement `get_metadata` in `data/files_dataset.py` to return the
`artist`, `genre` and `lyrics` for a given audio file. For now, you can pass `''` for lyrics to not use any lyrics.
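A minimal sketch of such an implementation, assuming a hypothetical sidecar-JSON convention (`song.wav` → `song.json`); the real method lives on the dataset class in `data/files_dataset.py`:
```
import json
import os

# Hypothetical sidecar-JSON convention: "song.wav" -> "song.json".
def get_metadata(filename):
    meta_path = os.path.splitext(filename)[0] + ".json"
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
        return meta.get("artist", "unknown"), meta.get("genre", "unknown"), meta.get("lyrics", "")
    return "unknown", "unknown", ""  # '' for lyrics means: use no lyrics
```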
For training with labels, we'll use `small_labelled_prior` in `hparams.py`, and we set `labels=True,labels_v3=True`.
We use 2 kinds of labels information:
- Artist/Genre:
- For each file, we return an artist_id and a list of genre_ids. The reason we have a list and not a single genre_id
is that in v2, we split genres like `blues_rock` into a bag of words `[blues, rock]`, and we pass at most
`max_bow_genre_size` of those; in `v3` we consider it as a single word and just set `max_bow_genre_size=1`.
- Update the `v3_artist_ids` and `v3_genre_ids` to use ids from your new dataset.
- In `small_labelled_prior`, set the hps `y_bins = (number_of_genres, number_of_artists)` and `max_bow_genre_size=1`.
- Timing:
- For each chunk of audio, we return the `total_length` of the song, the `offset` the current audio chunk is at and
the `sample_length` of the audio chunk. We have three timing embeddings: total_length, our current position, and our
current position as a fraction of the total length, and we divide the range of these values into `t_bins` discrete bins (a short sketch follows this list).
- In `small_labelled_prior`, set the hps `min_duration` and `max_duration` to be the shortest/longest duration of audio
files you want for your dataset, and `t_bins` for how many bins you want to discretize timing information into. Note
`min_duration * sr` needs to be at least `sample_length` to have an audio chunk in it.
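A small sketch of that timing discretization (assumed, not the exact jukebox code; all concrete values below are hypothetical):
```
def time_bin(value, v_min, v_max, t_bins):
    value = min(max(value, v_min), v_max)      # clamp to the valid range
    frac = (value - v_min) / (v_max - v_min)
    return min(int(frac * t_bins), t_bins - 1)

sr, t_bins = 44100, 64                         # hypothetical values
min_duration, max_duration = 60, 600           # seconds, dataset-specific
total_length, offset = 240 * sr, 30 * sr

total_bin    = time_bin(total_length, min_duration * sr, max_duration * sr, t_bins)
position_bin = time_bin(offset, 0, total_length, t_bins)
fraction_bin = time_bin(offset / total_length, 0.0, 1.0, t_bins)
```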
After these modifications, to train a top-level with labels, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_labelled_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior_labels \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
For sampling, follow the same instructions as [above](#sample-from-new-model) but use `small_labelled_prior` instead of `small_prior`.
#### Train with lyrics
To train in addition with lyrics, update `get_metadata` in `data/files_dataset.py` to return `lyrics` too.
For training with lyrics, we'll use `small_single_enc_dec_prior` in `hparams.py`.
- Lyrics:
- For each file, we linearly align the lyric characters to the audio, find the position in the lyrics that corresponds to
the midpoint of our audio chunk, and pass a window of `n_tokens` lyric characters centred around that (a sketch follows this list).
- In `small_single_enc_dec_prior`, set the hps `use_tokens=True` and `n_tokens` to be the number of lyric characters
to use for an audio chunk. Set it according to the `sample_length` you're training on so that it's large enough that
the lyrics for an audio chunk are almost always found inside a window of that size.
- If you use a non-English vocabulary, update `text_processor.py` with your new vocab and set
`n_vocab = number of characters in vocabulary` accordingly in `small_single_enc_dec_prior`. In v2, we had `n_vocab=80`,
and in v3 we missed `+`, so `n_vocab=79` characters.
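A sketch of that window selection (an assumed implementation, not the exact jukebox code):
```
def lyric_window(lyrics, offset, sample_length, total_length, n_tokens):
    # Linear alignment: the character at fraction f of the lyrics is assumed
    # to be sung at fraction f of the song.
    midpoint = offset + sample_length // 2
    center = int(len(lyrics) * midpoint / total_length)
    start = max(0, min(center - n_tokens // 2, len(lyrics) - n_tokens))
    return lyrics[start:start + n_tokens].ljust(n_tokens)  # pad short lyrics
```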
After these modifications, to train a top-level with labels and lyrics, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_single_enc_dec_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_single_enc_dec_prior_labels \
--sample_length=786432 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
To simplify hps choices, here we used a `single_enc_dec` model like the `1b_lyrics` model that combines both encoder and
decoder of the transformer into a single model. We do so by merging the lyric vocab and vq-vae vocab into a single
larger vocab, and flattening the lyric tokens and the vq-vae codes into a single sequence of length `n_ctx + n_tokens`.
This uses `attn_order=12` which includes `prime_attention` layers with keys/values from lyrics and queries from audio.
If you instead want to use a model with the usual encoder-decoder style transformer, use `small_sep_enc_dec_prior`.
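Illustratively, with stand-in numbers (only the layout matters):
```
n_vocab_lyrics = 79                      # hypothetical lyric vocab size
lyric_tokens = [5, 17, 3]                # stand-in lyric character ids
vqvae_codes  = [900, 12, 411]            # stand-in top-level vq-vae codes
# Merge vocabs by offsetting music codes past the lyric vocab, then flatten
# lyrics-first into one sequence of length n_tokens + n_ctx.
sequence = lyric_tokens + [n_vocab_lyrics + c for c in vqvae_codes]
```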
For sampling, follow the same instructions as [above](#sample-from-new-model), but use `small_single_enc_dec_prior` instead of
`small_prior`. To also get the alignment between lyrics and samples in the saved html, you'll need to set `alignment_layer`
and `alignment_head` in `small_single_enc_dec_prior`. To find which layer/head is best to use, run a forward pass on a training example,
save the attention weight tensors for all prime_attention layers, and pick the (layer, head) which has the best linear alignment
pattern between the lyrics keys and music queries.
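One hypothetical way to score that linearity once the attention weights are saved:
```
import numpy as np

def linearity_score(attn):
    # attn: (n_music_queries, n_lyric_keys) attention weights for one head
    peaks = attn.argmax(axis=-1).astype(float)  # most-attended lyric key per query
    return np.corrcoef(np.arange(len(peaks)), peaks)[0, 1]  # ~1.0 means linear

# Pick the (layer, head) pair with the highest score across prime_attention layers.
```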
### Fine-tune pre-trained top-level prior to new style(s)
Previously, we showed how to train a small top-level prior from scratch. Assuming you have a GPU with at least 15 GB of memory and support for fp16, you could fine-tune from our pre-trained 1B top-level prior. Here are the steps:
- Support `--labels=True` by implementing `get_metadata` in `jukebox/data/files_dataset.py` for your dataset.
- Add new entries in `jukebox/data/ids`. We recommend replacing existing mappings (e.g. rename `"unknown"`, etc. with styles of your choice). This uses the pre-trained style vectors as initialization and could potentially save some compute.
After these modifications, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,prior_1b_lyrics,all_fp16,cpu_ema --name=finetuned \
--sample_length=1048576 --bs=1 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
To get the best sample quality, it is recommended to anneal the learning rate at the end of training. Training the 5B top-level requires GPipe, which is not supported in this release.
# Citation
Please cite using the following bibtex entry:
```
@article{dhariwal2020jukebox,
title={Jukebox: A Generative Model for Music},
author={Dhariwal, Prafulla and Jun, Heewoo and Payne, Christine and Kim, Jong Wook and Radford, Alec and Sutskever, Ilya},
journal={arXiv preprint arXiv:2005.00341},
year={2020}
}
```
# License
[Noncommercial Use License](./LICENSE)
It covers both released code and weights.
| MasonCrinr/Masond | [
"arxiv:2005.00341",
"region:us"
] | 2023-12-11T01:21:37+00:00 | {} | 2023-12-11T01:21:57+00:00 | [
"2005.00341"
] | [] | TAGS
"### Fine-tune pre-trained top-level prior to new style(s)\nPreviously, we showed how to train a small top-level prior from scratch. Assuming you have a GPU with at least 15 GB of memory and support for fp16, you could fine-tune from our pre-trained 1B top-level prior. Here are the steps:\n\n- Support '--labels=True' by implementing 'get_metadata' in 'jukebox/data/files_dataset.py' for your dataset.\n- Add new entries in 'jukebox/data/ids'. We recommend replacing existing mappings (e.g. rename '\"unknown\"', etc with styles of your choice). This uses the pre-trained style vectors as initialization and could potentially save some compute.\n\nAfter these modifications, run \n\nTo get the best sample quality, it is recommended to anneal the learning rate in the end. Training the 5B top-level requires GPipe which is not supported in this release.\n\nPlease cite using the following bibtex entry:",
"# License \nNoncommercial Use License \n\nIt covers both released code and weights."
] | [
"TAGS\n#arxiv-2005.00341 #region-us \n",
"# Jukebox\nCode for \"Jukebox: A Generative Model for Music\"\n\nPaper \nBlog \nExplorer \nColab",
"# Install\nInstall the conda package manager from URL",
"# Sampling",
"## Sampling from scratch\nTo sample normally, run the following command. Model can be '5b', '5b_lyrics', '1b_lyrics'\n\n\nThe above generates the first 'sample_length_in_seconds' seconds of audio from a song of total length 'total_sample_length_in_seconds'.\nTo use multiple GPU's, launch the above scripts as 'mpiexec -n {ngpus} python jukebox/URL ...' so they use '{ngpus}'\n\nThe samples decoded from each level are stored in '{name}/level_{level}'. \nYou can also view the samples as an html with the aligned lyrics under '{name}/level_{level}/URL'. \nRun 'python -m URL' and open the html through the server to see the lyrics animate as the song plays. \nA summary of all sampling data including zs, x, labels and sampling_kwargs is stored in '{name}/level_{level}/URL'.\n\nThe hps are for a V100 GPU with 16 GB GPU memory. The '1b_lyrics', '5b', and '5b_lyrics' top-level priors take up \n3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store transformer key, value cache is about 400 MB \nfor '1b_lyrics' and 1 GB for '5b_lyrics' per sample. If you are having trouble with CUDA OOM issues, try '1b_lyrics' or \ndecrease 'max_batch_size' in URL, and '--n_samples' in the script call.\n\nOn a V100, it takes about 3 hrs to fully sample 20 seconds of music. Since this is a long time, it is recommended to use 'n_samples > 1' so you can generate as many samples as possible in parallel. The 1B lyrics and upsamplers can process 16 samples at a time, while 5B can fit only up to 3. Since the vast majority of time is spent on upsampling, we recommend using a multiple of 3 less than 16 like '--n_samples 15' for '5b_lyrics'. This will make the top-level generate samples in groups of three while upsampling is done in one pass.\n\nTo continue sampling from already generated codes for a longer duration, you can run\n\nHere, we take the 20 seconds samples saved from the first sampling run at 'sample_5b/level_0/URL' and continue by adding 20 more seconds. \n\nYou could also continue directly from the level 2 saved outputs, just pass '--codes_file=sample_5b/level_2/URL'.\n Note this will upsample the full 40 seconds song at the end.\n\nIf you stopped sampling at only the first level and want to upsample the saved codes, you can run\n\nHere, we take the 20 seconds samples saved from the first sampling run at 'sample_5b/level_2/URL' and upsample the lower two levels.",
"## Prompt with your own music\nIf you want to prompt the model with your own creative piece or any other music, first save them as wave files and run\n\nThis will load the four files, tile them to fill up to 'n_samples' batch size, and prime the model with the first 'prompt_length_in_seconds' seconds.",
"# Training",
"## VQVAE\nTo train a small vqvae, run\n\nHere, '{audio_files_dir}' is the directory in which you can put the audio files for your dataset, and '{ngpus}' is number of GPU's you want to use to train. \nThe above trains a two-level VQ-VAE with 'downs_t = (5,3)', and 'strides_t = (2, 2)' meaning we downsample the audio by '25 = 32' to get the first level of codes, and '28 = 256' to get the second level codes. \nCheckpoints are stored in the 'logs' folder. You can monitor the training by running Tensorboard",
"## Prior",
"### Train prior or upsamplers\nOnce the VQ-VAE is trained, we can restore it from its saved checkpoint and train priors on the learnt codes. \nTo train the top-level prior, we can run\n\n\n\nTo train the upsampler, we can run\n\nWe pass 'sample_length = n_ctx * downsample_of_level' so that after downsampling the tokens match the n_ctx of the prior hps. \nHere, 'n_ctx = 8192' and 'downsamples = (32, 256)', giving 'sample_lengths = (8192 * 32, 8192 * 256) = (65536, 2097152)' respectively for the bottom and top level.",
"### Learning rate annealing\nTo get the best sample quality anneal the learning rate to 0 near the end of training. To do so, continue training from the latest \ncheckpoint and run with",
"### Reuse pre-trained VQ-VAE and train top-level prior on new dataset from scratch.",
"#### Train without labels\nOur pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers \ncan upsample them back to audio that sound very similar to the original audio.\nTo re-use these for a new dataset of your choice, you can retrain just the top-level \n\nTo train top-level on a new dataset, run\n\nTraining the 'small_prior' with a batch size of 2, 4, and 8 requires 6.7 GB, 9.3 GB, and 15.8 GB of GPU memory, respectively. A few days to a week of training typically yields reasonable samples when the dataset is homogeneous (e.g. all piano pieces, songs of the same style, etc).\n\nNear the end of training, follow this to anneal the learning rate to 0",
"#### Sample from new model\nYou can then run URL with the top-level of our models replaced by your new model. To do so,\n- Add an entry 'my_model=(\"vqvae\", \"upsampler_level_0\", \"upsampler_level_1\", \"small_prior\")' in 'MODELS' in 'make_models.py'. \n- Update the 'small_prior' dictionary in 'URL' to include 'restore_prior='path/to/checkpoint''. If you\nyou changed any hps directly in the command line script (eg:'heads'), make sure to update them in the dictionary too so \nthat 'make_models' restores our checkpoint correctly.\n- Run URL as outlined in the sampling section, but now with '--model=my_model' \n\nFor example, let's say we trained 'small_vqvae', 'small_prior', and 'small_upsampler' under '/path/to/jukebox/logs'. In 'make_models.py', we are going to declare a tuple of the new models as 'my_model'.\n\n\nNext, in 'URL', we add them to the registry with the corresponding 'restore_'paths and any other command line options used during training. Another important note is that for top-level priors with lyric conditioning, we have to locate a self-attention layer that shows alignment between the lyric and music tokens. Look for layers where 'URL.transformer._attn_mods[layer].attn_func' is either 6 or 7. If your model is starting to sing along lyrics, it means some layer, head pair has learned alignment. Congrats!",
"#### Train with labels \nTo train with you own metadata for your audio files, implement 'get_metadata' in 'data/files_dataset.py' to return the \n'artist', 'genre' and 'lyrics' for a given audio file. For now, you can pass '''' for lyrics to not use any lyrics.\n\nFor training with labels, we'll use 'small_labelled_prior' in 'URL', and we set 'labels=True,labels_v3=True'. \nWe use 2 kinds of labels information:\n- Artist/Genre: \n - For each file, we return an artist_id and a list of genre_ids. The reason we have a list and not a single genre_id \n is that in v2, we split genres like 'blues_rock' into a bag of words '[blues, rock]', and we pass atmost \n 'max_bow_genre_size' of those, in 'v3' we consider it as a single word and just set 'max_bow_genre_size=1'.\n - Update the 'v3_artist_ids' and 'v3_genre_ids' to use ids from your new dataset. \n - In 'small_labelled_prior', set the hps 'y_bins = (number_of_genres, number_of_artists)' and 'max_bow_genre_size=1'. \n- Timing: \n - For each chunk of audio, we return the 'total_length' of the song, the 'offset' the current audio chunk is at and \n the 'sample_length' of the audio chunk. We have three timing embeddings: total_length, our current position, and our \n current position as a fraction of the total length, and we divide the range of these values into 't_bins' discrete bins. \n - In 'small_labelled_prior', set the hps 'min_duration' and 'max_duration' to be the shortest/longest duration of audio \n files you want for your dataset, and 't_bins' for how many bins you want to discretize timing information into. Note \n 'min_duration * sr' needs to be at least 'sample_length' to have an audio chunk in it.\n\nAfter these modifications, to train a top-level with labels, run\n\n\nFor sampling, follow same instructions as above but use 'small_labelled_prior' instead of 'small_prior'.",
"#### Train with lyrics\nTo train in addition with lyrics, update 'get_metadata' in 'data/files_dataset.py' to return 'lyrics' too.\nFor training with lyrics, we'll use 'small_single_enc_dec_prior' in 'URL'. \n- Lyrics: \n - For each file, we linearly align the lyric characters to the audio, find the position in lyric that corresponds to \n the midpoint of our audio chunk, and pass a window of 'n_tokens' lyric characters centred around that. \n - In 'small_single_enc_dec_prior', set the hps 'use_tokens=True' and 'n_tokens' to be the number of lyric characters \n to use for an audio chunk. Set it according to the 'sample_length' you're training on so that its large enough that \n the lyrics for an audio chunk are almost always found inside a window of that size.\n - If you use a non-English vocabulary, update 'text_processor.py' with your new vocab and set\n 'n_vocab = number of characters in vocabulary' accordingly in 'small_single_enc_dec_prior'. In v2, we had a 'n_vocab=80' \n and in v3 we missed '+' and so 'n_vocab=79' of characters. \n\nAfter these modifications, to train a top-level with labels and lyrics, run\n\nTo simplify hps choices, here we used a 'single_enc_dec' model like the '1b_lyrics' model that combines both encoder and \ndecoder of the transformer into a single model. We do so by merging the lyric vocab and vq-vae vocab into a single \nlarger vocab, and flattening the lyric tokens and the vq-vae codes into a single sequence of length 'n_ctx + n_tokens'. \nThis uses 'attn_order=12' which includes 'prime_attention' layers with keys/values from lyrics and queries from audio. \nIf you instead want to use a model with the usual encoder-decoder style transformer, use 'small_sep_enc_dec_prior'.\n\nFor sampling, follow same instructions as above but use 'small_single_enc_dec_prior' instead of \n'small_prior'. To also get the alignment between lyrics and samples in the saved html, you'll need to set 'alignment_layer' \nand 'alignment_head' in 'small_single_enc_dec_prior'. To find which layer/head is best to use, run a forward pass on a training example,\nsave the attention weight tensors for all prime_attention layers, and pick the (layer, head) which has the best linear alignment \npattern between the lyrics keys and music queries.",
"### Fine-tune pre-trained top-level prior to new style(s)\nPreviously, we showed how to train a small top-level prior from scratch. Assuming you have a GPU with at least 15 GB of memory and support for fp16, you could fine-tune from our pre-trained 1B top-level prior. Here are the steps:\n\n- Support '--labels=True' by implementing 'get_metadata' in 'jukebox/data/files_dataset.py' for your dataset.\n- Add new entries in 'jukebox/data/ids'. We recommend replacing existing mappings (e.g. rename '\"unknown\"', etc with styles of your choice). This uses the pre-trained style vectors as initialization and could potentially save some compute.\n\nAfter these modifications, run \n\nTo get the best sample quality, it is recommended to anneal the learning rate in the end. Training the 5B top-level requires GPipe which is not supported in this release.\n\nPlease cite using the following bibtex entry:",
"# License \nNoncommercial Use License \n\nIt covers both released code and weights."
] | [
14,
23,
10,
4,
727,
80,
2,
161,
2,
169,
40,
26,
193,
415,
589,
676,
248,
18
] | [
"passage: TAGS\n#arxiv-2005.00341 #region-us \n# Jukebox\nCode for \"Jukebox: A Generative Model for Music\"\n\nPaper \nBlog \nExplorer \nColab# Install\nInstall the conda package manager from URL# Sampling",
"passage: ## Sampling from scratch\nTo sample normally, run the following command. Model can be '5b', '5b_lyrics', '1b_lyrics'\n\n\nThe above generates the first 'sample_length_in_seconds' seconds of audio from a song of total length 'total_sample_length_in_seconds'.\nTo use multiple GPU's, launch the above scripts as 'mpiexec -n {ngpus} python jukebox/URL ...' so they use '{ngpus}'\n\nThe samples decoded from each level are stored in '{name}/level_{level}'. \nYou can also view the samples as an html with the aligned lyrics under '{name}/level_{level}/URL'. \nRun 'python -m URL' and open the html through the server to see the lyrics animate as the song plays. \nA summary of all sampling data including zs, x, labels and sampling_kwargs is stored in '{name}/level_{level}/URL'.\n\nThe hps are for a V100 GPU with 16 GB GPU memory. The '1b_lyrics', '5b', and '5b_lyrics' top-level priors take up \n3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store transformer key, value cache is about 400 MB \nfor '1b_lyrics' and 1 GB for '5b_lyrics' per sample. If you are having trouble with CUDA OOM issues, try '1b_lyrics' or \ndecrease 'max_batch_size' in URL, and '--n_samples' in the script call.\n\nOn a V100, it takes about 3 hrs to fully sample 20 seconds of music. Since this is a long time, it is recommended to use 'n_samples > 1' so you can generate as many samples as possible in parallel. The 1B lyrics and upsamplers can process 16 samples at a time, while 5B can fit only up to 3. Since the vast majority of time is spent on upsampling, we recommend using a multiple of 3 less than 16 like '--n_samples 15' for '5b_lyrics'. This will make the top-level generate samples in groups of three while upsampling is done in one pass.\n\nTo continue sampling from already generated codes for a longer duration, you can run\n\nHere, we take the 20 seconds samples saved from the first sampling run at 'sample_5b/level_0/URL' and continue by adding 20 more seconds. \n\nYou could also continue directly from the level 2 saved outputs, just pass '--codes_file=sample_5b/level_2/URL'.\n Note this will upsample the full 40 seconds song at the end.\n\nIf you stopped sampling at only the first level and want to upsample the saved codes, you can run\n\nHere, we take the 20 seconds samples saved from the first sampling run at 'sample_5b/level_2/URL' and upsample the lower two levels.## Prompt with your own music\nIf you want to prompt the model with your own creative piece or any other music, first save them as wave files and run\n\nThis will load the four files, tile them to fill up to 'n_samples' batch size, and prime the model with the first 'prompt_length_in_seconds' seconds.# Training## VQVAE\nTo train a small vqvae, run\n\nHere, '{audio_files_dir}' is the directory in which you can put the audio files for your dataset, and '{ngpus}' is number of GPU's you want to use to train. \nThe above trains a two-level VQ-VAE with 'downs_t = (5,3)', and 'strides_t = (2, 2)' meaning we downsample the audio by '25 = 32' to get the first level of codes, and '28 = 256' to get the second level codes. \nCheckpoints are stored in the 'logs' folder. You can monitor the training by running Tensorboard## Prior### Train prior or upsamplers\nOnce the VQ-VAE is trained, we can restore it from its saved checkpoint and train priors on the learnt codes. 
\nTo train the top-level prior, we can run\n\n\n\nTo train the upsampler, we can run\n\nWe pass 'sample_length = n_ctx * downsample_of_level' so that after downsampling the tokens match the n_ctx of the prior hps. \nHere, 'n_ctx = 8192' and 'downsamples = (32, 256)', giving 'sample_lengths = (8192 * 32, 8192 * 256) = (65536, 2097152)' respectively for the bottom and top level.### Learning rate annealing\nTo get the best sample quality anneal the learning rate to 0 near the end of training. To do so, continue training from the latest \ncheckpoint and run with### Reuse pre-trained VQ-VAE and train top-level prior on new dataset from scratch.",
"passage: #### Train without labels\nOur pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers \ncan upsample them back to audio that sound very similar to the original audio.\nTo re-use these for a new dataset of your choice, you can retrain just the top-level \n\nTo train top-level on a new dataset, run\n\nTraining the 'small_prior' with a batch size of 2, 4, and 8 requires 6.7 GB, 9.3 GB, and 15.8 GB of GPU memory, respectively. A few days to a week of training typically yields reasonable samples when the dataset is homogeneous (e.g. all piano pieces, songs of the same style, etc).\n\nNear the end of training, follow this to anneal the learning rate to 0#### Sample from new model\nYou can then run URL with the top-level of our models replaced by your new model. To do so,\n- Add an entry 'my_model=(\"vqvae\", \"upsampler_level_0\", \"upsampler_level_1\", \"small_prior\")' in 'MODELS' in 'make_models.py'. \n- Update the 'small_prior' dictionary in 'URL' to include 'restore_prior='path/to/checkpoint''. If you\nyou changed any hps directly in the command line script (eg:'heads'), make sure to update them in the dictionary too so \nthat 'make_models' restores our checkpoint correctly.\n- Run URL as outlined in the sampling section, but now with '--model=my_model' \n\nFor example, let's say we trained 'small_vqvae', 'small_prior', and 'small_upsampler' under '/path/to/jukebox/logs'. In 'make_models.py', we are going to declare a tuple of the new models as 'my_model'.\n\n\nNext, in 'URL', we add them to the registry with the corresponding 'restore_'paths and any other command line options used during training. Another important note is that for top-level priors with lyric conditioning, we have to locate a self-attention layer that shows alignment between the lyric and music tokens. Look for layers where 'URL.transformer._attn_mods[layer].attn_func' is either 6 or 7. If your model is starting to sing along lyrics, it means some layer, head pair has learned alignment. Congrats!",
"passage: #### Train with labels \nTo train with you own metadata for your audio files, implement 'get_metadata' in 'data/files_dataset.py' to return the \n'artist', 'genre' and 'lyrics' for a given audio file. For now, you can pass '''' for lyrics to not use any lyrics.\n\nFor training with labels, we'll use 'small_labelled_prior' in 'URL', and we set 'labels=True,labels_v3=True'. \nWe use 2 kinds of labels information:\n- Artist/Genre: \n - For each file, we return an artist_id and a list of genre_ids. The reason we have a list and not a single genre_id \n is that in v2, we split genres like 'blues_rock' into a bag of words '[blues, rock]', and we pass atmost \n 'max_bow_genre_size' of those, in 'v3' we consider it as a single word and just set 'max_bow_genre_size=1'.\n - Update the 'v3_artist_ids' and 'v3_genre_ids' to use ids from your new dataset. \n - In 'small_labelled_prior', set the hps 'y_bins = (number_of_genres, number_of_artists)' and 'max_bow_genre_size=1'. \n- Timing: \n - For each chunk of audio, we return the 'total_length' of the song, the 'offset' the current audio chunk is at and \n the 'sample_length' of the audio chunk. We have three timing embeddings: total_length, our current position, and our \n current position as a fraction of the total length, and we divide the range of these values into 't_bins' discrete bins. \n - In 'small_labelled_prior', set the hps 'min_duration' and 'max_duration' to be the shortest/longest duration of audio \n files you want for your dataset, and 't_bins' for how many bins you want to discretize timing information into. Note \n 'min_duration * sr' needs to be at least 'sample_length' to have an audio chunk in it.\n\nAfter these modifications, to train a top-level with labels, run\n\n\nFor sampling, follow same instructions as above but use 'small_labelled_prior' instead of 'small_prior'."
] |
44efbd5f0366ed7667c21d206527e6ef02a1a574 | This is a dataset of 10,000 interacting pairs of proteins obtained from UniProt,
and clustered using methods explained in [this blog post](https://huggingface.co/blog/AmelieSchreiber/faster-pha). Note,
cluster 0 is over-represented in this dataset and this should be considered when creating train/test splits with this
data. | AmelieSchreiber/pha_clustered_protein_complexes | [
"license:mit",
"region:us"
] | 2023-12-11T01:21:47+00:00 | {"license": "mit"} | 2023-12-16T20:18:29+00:00 | [] | [] | TAGS
#license-mit #region-us
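A sketch of a cluster-aware split for the card above. The dataset id is real, but the 'cluster' column name is an assumption about the schema; check 'ds.features' before relying on it:

```python
# Group-aware split: keep each cluster (including the large cluster 0)
# entirely on one side of the split to avoid train/test leakage.
from datasets import load_dataset
from sklearn.model_selection import GroupShuffleSplit
import numpy as np

ds = load_dataset("AmelieSchreiber/pha_clustered_protein_complexes", split="train")
groups = np.array(ds["cluster"])  # hypothetical column holding cluster ids

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros(len(groups)), groups=groups))
train, test = ds.select(train_idx), ds.select(test_idx)
```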
|
61046422724ce1433e621e214aba998c6df37b37 |
# Dataset Card for Evaluation run of Nexusflow/NexusRaven-V2-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Nexusflow/NexusRaven-V2-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Nexusflow/NexusRaven-V2-13B](https://huggingface.co/Nexusflow/NexusRaven-V2-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B",
"harness_winogrande_5",
split="train")
```
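Building on the snippet above, the aggregated numbers live in the "results" configuration; as noted earlier in this card, the "train" split always points at the latest run. A minimal sketch using those names:

```python
# Load the aggregated metrics rather than per-task details.
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B",
    "results",
    split="train",  # "train" tracks the latest run per the card
)
```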
## Latest results
These are the [latest results from run 2023-12-11T01:31:11.054887](https://huggingface.co/datasets/open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B/blob/main/results_2023-12-11T01-31-11.054887.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4488955038753255,
"acc_stderr": 0.0344489486364852,
"acc_norm": 0.4526483549559697,
"acc_norm_stderr": 0.035198247944632347,
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502356,
"mc2": 0.4453595923052835,
"mc2_stderr": 0.01505063472965778
},
"harness|arc:challenge|25": {
"acc": 0.4232081911262799,
"acc_stderr": 0.014438036220848039,
"acc_norm": 0.4513651877133106,
"acc_norm_stderr": 0.014542104569955264
},
"harness|hellaswag|10": {
"acc": 0.5054769966142203,
"acc_stderr": 0.004989482040610104,
"acc_norm": 0.6739693288189603,
"acc_norm_stderr": 0.004678006403691731
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.34814814814814815,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.34814814814814815,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.04060127035236397,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.04060127035236397
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4,
"acc_stderr": 0.03015113445777629,
"acc_norm": 0.4,
"acc_norm_stderr": 0.03015113445777629
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.375,
"acc_stderr": 0.04048439222695598,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04048439222695598
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3815028901734104,
"acc_stderr": 0.037038511930995194,
"acc_norm": 0.3815028901734104,
"acc_norm_stderr": 0.037038511930995194
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.18627450980392157,
"acc_stderr": 0.03873958714149354,
"acc_norm": 0.18627450980392157,
"acc_norm_stderr": 0.03873958714149354
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.40425531914893614,
"acc_stderr": 0.03208115750788684,
"acc_norm": 0.40425531914893614,
"acc_norm_stderr": 0.03208115750788684
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.04339138322579861,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.04339138322579861
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5103448275862069,
"acc_stderr": 0.04165774775728763,
"acc_norm": 0.5103448275862069,
"acc_norm_stderr": 0.04165774775728763
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.291005291005291,
"acc_stderr": 0.02339382650048487,
"acc_norm": 0.291005291005291,
"acc_norm_stderr": 0.02339382650048487
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30158730158730157,
"acc_stderr": 0.04104947269903394,
"acc_norm": 0.30158730158730157,
"acc_norm_stderr": 0.04104947269903394
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4483870967741935,
"acc_stderr": 0.02829205683011273,
"acc_norm": 0.4483870967741935,
"acc_norm_stderr": 0.02829205683011273
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.32019704433497537,
"acc_stderr": 0.032826493853041504,
"acc_norm": 0.32019704433497537,
"acc_norm_stderr": 0.032826493853041504
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5515151515151515,
"acc_stderr": 0.038835659779569286,
"acc_norm": 0.5515151515151515,
"acc_norm_stderr": 0.038835659779569286
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5757575757575758,
"acc_stderr": 0.035212249088415866,
"acc_norm": 0.5757575757575758,
"acc_norm_stderr": 0.035212249088415866
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5958549222797928,
"acc_stderr": 0.03541508578884021,
"acc_norm": 0.5958549222797928,
"acc_norm_stderr": 0.03541508578884021
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3717948717948718,
"acc_stderr": 0.024503472557110943,
"acc_norm": 0.3717948717948718,
"acc_norm_stderr": 0.024503472557110943
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.02671924078371217,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.02671924078371217
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.39915966386554624,
"acc_stderr": 0.031811100324139245,
"acc_norm": 0.39915966386554624,
"acc_norm_stderr": 0.031811100324139245
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.03734535676787198,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.03734535676787198
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5614678899082569,
"acc_stderr": 0.021274713073954572,
"acc_norm": 0.5614678899082569,
"acc_norm_stderr": 0.021274713073954572
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.03141554629402544,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.03141554629402544
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.034849415144292316,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.034849415144292316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5991561181434599,
"acc_stderr": 0.031900803894732356,
"acc_norm": 0.5991561181434599,
"acc_norm_stderr": 0.031900803894732356
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5201793721973094,
"acc_stderr": 0.033530461674123,
"acc_norm": 0.5201793721973094,
"acc_norm_stderr": 0.033530461674123
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.4580152671755725,
"acc_stderr": 0.04369802690578757,
"acc_norm": 0.4580152671755725,
"acc_norm_stderr": 0.04369802690578757
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6198347107438017,
"acc_stderr": 0.04431324501968432,
"acc_norm": 0.6198347107438017,
"acc_norm_stderr": 0.04431324501968432
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.04826217294139894,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.04826217294139894
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5644171779141104,
"acc_stderr": 0.03895632464138937,
"acc_norm": 0.5644171779141104,
"acc_norm_stderr": 0.03895632464138937
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.04635550135609976,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.04635550135609976
},
"harness|hendrycksTest-management|5": {
"acc": 0.5631067961165048,
"acc_stderr": 0.04911147107365777,
"acc_norm": 0.5631067961165048,
"acc_norm_stderr": 0.04911147107365777
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.02934311479809446,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.02934311479809446
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5823754789272031,
"acc_stderr": 0.017635637326951514,
"acc_norm": 0.5823754789272031,
"acc_norm_stderr": 0.017635637326951514
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.476878612716763,
"acc_stderr": 0.026890297881303125,
"acc_norm": 0.476878612716763,
"acc_norm_stderr": 0.026890297881303125
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2994413407821229,
"acc_stderr": 0.015318257745976708,
"acc_norm": 0.2994413407821229,
"acc_norm_stderr": 0.015318257745976708
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.45751633986928103,
"acc_stderr": 0.028526383452142638,
"acc_norm": 0.45751633986928103,
"acc_norm_stderr": 0.028526383452142638
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4887459807073955,
"acc_stderr": 0.02839089739686352,
"acc_norm": 0.4887459807073955,
"acc_norm_stderr": 0.02839089739686352
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5030864197530864,
"acc_stderr": 0.02782021415859437,
"acc_norm": 0.5030864197530864,
"acc_norm_stderr": 0.02782021415859437
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.028473501272963764,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.028473501272963764
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3220338983050847,
"acc_stderr": 0.01193393607189109,
"acc_norm": 0.3220338983050847,
"acc_norm_stderr": 0.01193393607189109
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.29044117647058826,
"acc_stderr": 0.027576468622740533,
"acc_norm": 0.29044117647058826,
"acc_norm_stderr": 0.027576468622740533
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.43790849673202614,
"acc_stderr": 0.020071257886886525,
"acc_norm": 0.43790849673202614,
"acc_norm_stderr": 0.020071257886886525
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5181818181818182,
"acc_stderr": 0.04785964010794916,
"acc_norm": 0.5181818181818182,
"acc_norm_stderr": 0.04785964010794916
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5387755102040817,
"acc_stderr": 0.031912820526692774,
"acc_norm": 0.5387755102040817,
"acc_norm_stderr": 0.031912820526692774
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5323383084577115,
"acc_stderr": 0.03528131472933607,
"acc_norm": 0.5323383084577115,
"acc_norm_stderr": 0.03528131472933607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-virology|5": {
"acc": 0.39759036144578314,
"acc_stderr": 0.03809973084540219,
"acc_norm": 0.39759036144578314,
"acc_norm_stderr": 0.03809973084540219
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5789473684210527,
"acc_stderr": 0.03786720706234214,
"acc_norm": 0.5789473684210527,
"acc_norm_stderr": 0.03786720706234214
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502356,
"mc2": 0.4453595923052835,
"mc2_stderr": 0.01505063472965778
},
"harness|winogrande|5": {
"acc": 0.6637726913970008,
"acc_stderr": 0.01327728659399345
},
"harness|gsm8k|5": {
"acc": 0.2092494313874147,
"acc_stderr": 0.011204544210483803
}
}
```
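As a small worked example of how these per-task numbers roll up, here is a sketch that loads the linked results file (assuming it holds exactly the dict printed above) and averages the MMLU-style subtasks:

```python
# Average acc_norm (falling back to acc) over the hendrycksTest tasks.
import json

with open("results_2023-12-11T01-31-11.054887.json") as f:  # file name from the link above
    results = json.load(f)

mmlu = [v for k, v in results.items() if k.startswith("harness|hendrycksTest")]
avg = sum(t.get("acc_norm", t["acc"]) for t in mmlu) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean acc_norm = {avg:.4f}")
```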
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B | [
"region:us"
] | 2023-12-11T01:34:05+00:00 | {"pretty_name": "Evaluation run of Nexusflow/NexusRaven-V2-13B", "repo_url": "https://huggingface.co/Nexusflow/NexusRaven-V2-13B", "leaderboard_url":
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|arc:challenge|25_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|gsm8k|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hellaswag|10_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-31-11.054887.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-31-11.054887.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-31-11.054887.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T01-31-11.054887.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-31-11.054887.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T01-31-11.054887.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["**/details_harness|winogrande|5_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T01-31-11.054887.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T01_31_11.054887", "path": ["results_2023-12-11T01-31-11.054887.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T01-31-11.054887.parquet"]}]}]} | 2023-12-11T01:34:45+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Nexusflow/NexusRaven-V2-13B
## Dataset Description
- Homepage:
- Repository: https://huggingface.co/Nexusflow/NexusRaven-V2-13B
- Paper:
- Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- Point of Contact: [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model Nexusflow/NexusRaven-V2-13B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
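A minimal sketch, assuming the leaderboard's usual `details_<org>__<model>` dataset naming (the exact repository id is not quoted in this card) and the `harness_winogrande_5` configuration declared in this dataset's config list:

```python
from datasets import load_dataset

# Dataset id inferred from the leaderboard's "details_<org>__<model>" naming convention.
data = load_dataset("open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B",
	"harness_winogrande_5",
	split="train")
```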
## Latest results
These are the latest results from run 2023-12-11T01:31:11.054887 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval).
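A sketch of retrieving these aggregated metrics, assuming the "results" configuration and its "latest" split declared in this dataset's config list (repository id inferred from the leaderboard's naming convention, as above):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics; the "latest" split points at the newest run.
results = load_dataset("open-llm-leaderboard/details_Nexusflow__NexusRaven-V2-13B",
	"results",
	split="latest")
```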
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
53ee0fcafcd2a40de6ae1dfb9beb3f5fedd8b442 |
# Dataset Card for Evaluation run of Fredithefish/OpenZephyrChat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Fredithefish/OpenZephyrChat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Fredithefish/OpenZephyrChat](https://huggingface.co/Fredithefish/OpenZephyrChat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Fredithefish__OpenZephyrChat",
"harness_winogrande_5",
split="train")
```
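To enumerate the 63 per-task configurations (plus the aggregated "results" one), `get_dataset_config_names` from the `datasets` library should work:

```python
from datasets import get_dataset_config_names

# Lists every configuration name available for this evaluation dataset.
configs = get_dataset_config_names("open-llm-leaderboard/details_Fredithefish__OpenZephyrChat")
print(configs)
```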
## Latest results
These are the [latest results from run 2023-12-11T01:34:50.258646](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__OpenZephyrChat/blob/main/results_2023-12-11T01-34-50.258646.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6516436356967714,
"acc_stderr": 0.03194890805986729,
"acc_norm": 0.6526191805877333,
"acc_norm_stderr": 0.03259872006364224,
"mc1": 0.3268053855569155,
"mc1_stderr": 0.01641987473113503,
"mc2": 0.4824339066763302,
"mc2_stderr": 0.015163614263653211
},
"harness|arc:challenge|25": {
"acc": 0.6203071672354948,
"acc_stderr": 0.014182119866974872,
"acc_norm": 0.6484641638225256,
"acc_norm_stderr": 0.013952413699600933
},
"harness|hellaswag|10": {
"acc": 0.6681935869348735,
"acc_stderr": 0.004698995789478832,
"acc_norm": 0.8508265285799641,
"acc_norm_stderr": 0.0035553128780523914
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7916666666666666,
"acc_stderr": 0.03396116205845333,
"acc_norm": 0.7916666666666666,
"acc_norm_stderr": 0.03396116205845333
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736411,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736411
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.025305906241590632,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.025305906241590632
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7935483870967742,
"acc_stderr": 0.023025899617188723,
"acc_norm": 0.7935483870967742,
"acc_norm_stderr": 0.023025899617188723
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.031922715695483,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.031922715695483
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229862,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229862
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.02247325333276877,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.02247325333276877
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.02938162072646507,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.02938162072646507
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8568807339449541,
"acc_stderr": 0.015014462497168589,
"acc_norm": 0.8568807339449541,
"acc_norm_stderr": 0.015014462497168589
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553353,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553353
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8396624472573839,
"acc_stderr": 0.02388438092596567,
"acc_norm": 0.8396624472573839,
"acc_norm_stderr": 0.02388438092596567
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7174887892376681,
"acc_stderr": 0.03021683101150877,
"acc_norm": 0.7174887892376681,
"acc_norm_stderr": 0.03021683101150877
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098822,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098822
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128138,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128138
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903335,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903335
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7485549132947977,
"acc_stderr": 0.023357365785874037,
"acc_norm": 0.7485549132947977,
"acc_norm_stderr": 0.023357365785874037
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38994413407821227,
"acc_stderr": 0.016312376629213067,
"acc_norm": 0.38994413407821227,
"acc_norm_stderr": 0.016312376629213067
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.025553169991826524,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.025553169991826524
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.025403832978179615,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.025403832978179615
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47327249022164275,
"acc_stderr": 0.012751977967676008,
"acc_norm": 0.47327249022164275,
"acc_norm_stderr": 0.012751977967676008
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.028739328513983572,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.028739328513983572
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.018950886770806315,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.018950886770806315
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128445,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128445
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445412,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445412
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3268053855569155,
"mc1_stderr": 0.01641987473113503,
"mc2": 0.4824339066763302,
"mc2_stderr": 0.015163614263653211
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989245
},
"harness|gsm8k|5": {
"acc": 0.6459438968915845,
"acc_stderr": 0.013172728385222576
}
}
```
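The same JSON can be fetched straight from the repository; a minimal sketch using `huggingface_hub`, with the filename taken from the link above (the file's exact top-level schema is not spelled out in this card, so inspect it before relying on specific keys):

```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file linked in the "Latest results" section.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Fredithefish__OpenZephyrChat",
    filename="results_2023-12-11T01-34-50.258646.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)
print(sorted(results))  # top-level keys; schema assumed, verify before use
```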
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Fredithefish__OpenZephyrChat | [
"region:us"
# Dataset Card for Evaluation run of Fredithefish/OpenZephyrChat
## Dataset Description
- Homepage: 
- Repository: https://huggingface.co/Fredithefish/OpenZephyrChat
- Paper: 
- Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- Point of Contact: [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Fredithefish/OpenZephyrChat](https://huggingface.co/Fredithefish/OpenZephyrChat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
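To see the available configurations before loading anything, one option is to enumerate them with the `datasets` library (a minimal sketch; only the repository name comes from this card):
```python
from datasets import get_dataset_config_names

# Enumerate the evaluation configurations (one per task, plus the
# aggregated "results" configuration described above).
configs = get_dataset_config_names("open-llm-leaderboard/details_Fredithefish__OpenZephyrChat")
print(len(configs))
print(configs[:5])
```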
To load the details from a run, you can for instance do the following:
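```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Fredithefish__OpenZephyrChat",
	"harness_winogrande_5",
	split="train")
```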
## Latest results
These are the [latest results from run 2023-12-11T01:34:50.258646](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__OpenZephyrChat/blob/main/results_2023-12-11T01-34-50.258646.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
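The aggregated metrics and headline benchmark scores from that file are excerpted below; the per-task MMLU scores are available in the full results file linked above:
```python
{
    "all": {
        "acc": 0.6516436356967714,
        "acc_stderr": 0.03194890805986729,
        "acc_norm": 0.6526191805877333,
        "acc_norm_stderr": 0.03259872006364224,
        "mc1": 0.3268053855569155,
        "mc1_stderr": 0.01641987473113503,
        "mc2": 0.4824339066763302,
        "mc2_stderr": 0.015163614263653211
    },
    "harness|arc:challenge|25": {
        "acc": 0.6203071672354948,
        "acc_stderr": 0.014182119866974872,
        "acc_norm": 0.6484641638225256,
        "acc_norm_stderr": 0.013952413699600933
    },
    "harness|hellaswag|10": {
        "acc": 0.6681935869348735,
        "acc_stderr": 0.004698995789478832,
        "acc_norm": 0.8508265285799641,
        "acc_norm_stderr": 0.0035553128780523914
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.3268053855569155,
        "mc1_stderr": 0.01641987473113503,
        "mc2": 0.4824339066763302,
        "mc2_stderr": 0.015163614263653211
    },
    "harness|winogrande|5": {
        "acc": 0.8105761641673244,
        "acc_stderr": 0.011012790432989245
    },
    "harness|gsm8k|5": {
        "acc": 0.6459438968915845,
        "acc_stderr": 0.013172728385222576
    }
}
```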
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
# Dataset Card for Evaluation run of mncai/yi-34B-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mncai/yi-34B-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mncai/yi-34B-v3](https://huggingface.co/mncai/yi-34B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Each configuration corresponds to one evaluated task; the "train" split
# always points to the results of the latest run.
data = load_dataset("open-llm-leaderboard/details_mncai__yi-34B-v3",
                    "harness_winogrande_5",
                    split="train")
```
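The aggregated metrics live in the "results" configuration described above, and the "latest" split always resolves to the most recent run. A short sketch, printing the first row to inspect the stored metrics (row layout assumed):

```python
from datasets import load_dataset

# "results" holds the aggregated scores; "latest" resolves to the newest run.
results = load_dataset("open-llm-leaderboard/details_mncai__yi-34B-v3",
                       "results",
                       split="latest")
print(results[0])
```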
## Latest results
These are the [latest results from run 2023-12-11T01:51:08.694143](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__yi-34B-v3/blob/main/results_2023-12-11T01-51-08.694143.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7536948044621744,
"acc_stderr": 0.028378789321173548,
"acc_norm": 0.7581198984934292,
"acc_norm_stderr": 0.02891498378900509,
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5753679426280454,
"mc2_stderr": 0.014962842073717312
},
"harness|arc:challenge|25": {
"acc": 0.6390784982935154,
"acc_stderr": 0.014034761386175452,
"acc_norm": 0.6706484641638225,
"acc_norm_stderr": 0.013734057652635476
},
"harness|hellaswag|10": {
"acc": 0.6487751443935471,
"acc_stderr": 0.004763774981834676,
"acc_norm": 0.8511252738498307,
"acc_norm_stderr": 0.0035523745313052004
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6962962962962963,
"acc_stderr": 0.03972552884785137,
"acc_norm": 0.6962962962962963,
"acc_norm_stderr": 0.03972552884785137
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474935,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474935
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8188679245283019,
"acc_stderr": 0.023702963526757798,
"acc_norm": 0.8188679245283019,
"acc_norm_stderr": 0.023702963526757798
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9097222222222222,
"acc_stderr": 0.023964965777906935,
"acc_norm": 0.9097222222222222,
"acc_norm_stderr": 0.023964965777906935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.034355680560478746,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.034355680560478746
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7787234042553192,
"acc_stderr": 0.027136349602424056,
"acc_norm": 0.7787234042553192,
"acc_norm_stderr": 0.027136349602424056
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5701754385964912,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.5701754385964912,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7379310344827587,
"acc_stderr": 0.036646663372252565,
"acc_norm": 0.7379310344827587,
"acc_norm_stderr": 0.036646663372252565
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6957671957671958,
"acc_stderr": 0.02369541500946309,
"acc_norm": 0.6957671957671958,
"acc_norm_stderr": 0.02369541500946309
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562427,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562427
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9,
"acc_stderr": 0.017066403719657255,
"acc_norm": 0.9,
"acc_norm_stderr": 0.017066403719657255
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6699507389162561,
"acc_stderr": 0.033085304262282574,
"acc_norm": 0.6699507389162561,
"acc_norm_stderr": 0.033085304262282574
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9343434343434344,
"acc_stderr": 0.01764652667723333,
"acc_norm": 0.9343434343434344,
"acc_norm_stderr": 0.01764652667723333
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9740932642487047,
"acc_stderr": 0.01146452335695318,
"acc_norm": 0.9740932642487047,
"acc_norm_stderr": 0.01146452335695318
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.019457390787681803,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.019457390787681803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4111111111111111,
"acc_stderr": 0.02999992350870669,
"acc_norm": 0.4111111111111111,
"acc_norm_stderr": 0.02999992350870669
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.023005459446673964,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.023005459446673964
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4966887417218543,
"acc_stderr": 0.04082393379449654,
"acc_norm": 0.4966887417218543,
"acc_norm_stderr": 0.04082393379449654
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9284403669724771,
"acc_stderr": 0.01105125524781546,
"acc_norm": 0.9284403669724771,
"acc_norm_stderr": 0.01105125524781546
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6620370370370371,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.6620370370370371,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969426994,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969426994
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9156118143459916,
"acc_stderr": 0.01809424711647332,
"acc_norm": 0.9156118143459916,
"acc_norm_stderr": 0.01809424711647332
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8026905829596412,
"acc_stderr": 0.02670985334496796,
"acc_norm": 0.8026905829596412,
"acc_norm_stderr": 0.02670985334496796
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9008264462809917,
"acc_stderr": 0.02728524631275896,
"acc_norm": 0.9008264462809917,
"acc_norm_stderr": 0.02728524631275896
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.029239272675632748,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.029239272675632748
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8650306748466258,
"acc_stderr": 0.026845765054553855,
"acc_norm": 0.8650306748466258,
"acc_norm_stderr": 0.026845765054553855
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.8446601941747572,
"acc_stderr": 0.03586594738573974,
"acc_norm": 0.8446601941747572,
"acc_norm_stderr": 0.03586594738573974
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253862,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253862
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9029374201787995,
"acc_stderr": 0.010586474712018283,
"acc_norm": 0.9029374201787995,
"acc_norm_stderr": 0.010586474712018283
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8121387283236994,
"acc_stderr": 0.02102926975242323,
"acc_norm": 0.8121387283236994,
"acc_norm_stderr": 0.02102926975242323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7039106145251397,
"acc_stderr": 0.015268677317602274,
"acc_norm": 0.7039106145251397,
"acc_norm_stderr": 0.015268677317602274
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8398692810457516,
"acc_stderr": 0.020998740930362303,
"acc_norm": 0.8398692810457516,
"acc_norm_stderr": 0.020998740930362303
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8167202572347267,
"acc_stderr": 0.021974198848265812,
"acc_norm": 0.8167202572347267,
"acc_norm_stderr": 0.021974198848265812
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8641975308641975,
"acc_stderr": 0.0190615881815054,
"acc_norm": 0.8641975308641975,
"acc_norm_stderr": 0.0190615881815054
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6276595744680851,
"acc_stderr": 0.02883892147125145,
"acc_norm": 0.6276595744680851,
"acc_norm_stderr": 0.02883892147125145
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5971316818774446,
"acc_stderr": 0.01252695557711801,
"acc_norm": 0.5971316818774446,
"acc_norm_stderr": 0.01252695557711801
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8345588235294118,
"acc_stderr": 0.02257177102549474,
"acc_norm": 0.8345588235294118,
"acc_norm_stderr": 0.02257177102549474
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.015643069911273344,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.015643069911273344
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.04309118709946458,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.04309118709946458
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8489795918367347,
"acc_stderr": 0.022923004094736844,
"acc_norm": 0.8489795918367347,
"acc_norm_stderr": 0.022923004094736844
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101706,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101706
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5903614457831325,
"acc_stderr": 0.038284011150790206,
"acc_norm": 0.5903614457831325,
"acc_norm_stderr": 0.038284011150790206
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5753679426280454,
"mc2_stderr": 0.014962842073717312
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237419
},
"harness|gsm8k|5": {
"acc": 0.645185746777862,
"acc_stderr": 0.013179083387979214
}
}
```
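To see every per-task configuration available (63 in total), the `datasets` library can enumerate them; a quick sketch:

```python
from datasets import get_dataset_config_names

# List all evaluation configurations stored in this repository.
configs = get_dataset_config_names("open-llm-leaderboard/details_mncai__yi-34B-v3")
print(len(configs))
print(configs[:5])  # e.g. the first few harness_* task configs
```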
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
#region-us
|
# Dataset Card for Evaluation run of mncai/yi-34B-v3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mncai/yi-34B-v3 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
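A minimal sketch of that load, mirroring the pattern used by the other evaluation cards in this collection (the repo id `open-llm-leaderboard/details_mncai__yi-34B-v3` is inferred from the usual `details_<org>__<model>` naming and is an assumption here; `harness_winogrande_5` is one of the configs declared in this card's metadata):

```python
from datasets import load_dataset

# Repo id inferred from the leaderboard's "details_<org>__<model>" convention
# (assumption); the config name is one declared in this card's metadata.
data = load_dataset("open-llm-leaderboard/details_mncai__yi-34B-v3",
	"harness_winogrande_5",
	split="train")
```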
## Latest results
These are the latest results from run 2023-12-11T01:51:08.694143 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of mncai/yi-34B-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/yi-34B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T01:51:08.694143(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mncai/yi-34B-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/yi-34B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T01:51:08.694143(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
19,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mncai/yi-34B-v3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/yi-34B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T01:51:08.694143(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
f2163723c8b70a4db865de1ae26232478c7e78ab |
### Description
\~3.4k DPO pairs, generated by [Iambe](https://huggingface.co/athirdpath/Iambe-20b-DARE-v2-GGUF) feat. GPT-4 (~10% GPT-4, ~80% Iambe @ q5_k_m / ~10% Iambe @ q6_k) with temp 1.2 and min_p 0.15.
They are shuffled this time, as I was not aware that TRL did not do that automatically until I could see the shifts in the dataset mirrored in the loss patterns.
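If you are reusing these pairs with TRL yourself, a minimal sketch of doing that shuffle up front (the dataset id is this repo's; the seed is illustrative, and whether your TRL version shuffles for you is worth checking rather than assuming):

```python
from datasets import load_dataset

# Shuffle the preference pairs once with a fixed seed before handing them to
# the trainer, so ordering effects don't show up in the loss curve.
pairs = load_dataset("athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED",
                     split="train")
pairs = pairs.shuffle(seed=42)
```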
Iambe is a smart girl, so both the chosen and rejected for each pair are generated at the same time from a single two-part prompt (not the one in the dataset). Only a few dozen failed to generate the rejected response, and in those cases I filled in the rejected output with a standard "as an AI" style refusal. The way I set things up caused any prompt formatting errors to automatically go into the REJECTED field, hopefully discouraging such behavior.
The dataset is mostly intended for ERP, so erotic situations are an outsized portion of the dataset. However, it isn't all sexual, with sport, adventure, etc. content to provide a baseline.
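As a concrete reading of the sampling settings at the top of this section, a rough llama-cpp-python sketch (the GGUF path and context size are placeholder assumptions about a local setup, and the `min_p` keyword assumes a llama-cpp-python build recent enough to expose it):

```python
from llama_cpp import Llama

# Load a quantized Iambe GGUF locally (path is a placeholder assumption).
llm = Llama(model_path="iambe-20b-dare-v2.q5_k_m.gguf", n_ctx=4096)

# temp 1.2 / min_p 0.15, matching the generation settings described above.
out = llm("Write the opening scene of an adventure story.",
          temperature=1.2, min_p=0.15, max_tokens=256)
print(out["choices"][0]["text"])
```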
### Downsides and Weaknesses
This dataset has not been manually cleaned, besides some basic search-and-replace. This dataset has been overseen in bursts (such content passed muster or the entire batch was rejected) but much of the content has yet to be read by a human. Some chosen responses have the AI acting for the user, I've removed what I've seen but this is a lot to manually filter. Despite my best efforts to prompt away from it, some rejected responses are... what seems to be... let's say, in a certain voice? I really tried to avoid it, and most of it was removed.
### Goals
This dataset is intended to be used to produce a BETTER Iambe, that can itself produce even more data of higher quality. Bootstraps to the moon, baby! So, while this dataset still has rough points, I feel like it's worth uploading and starting to train the model. I also want to combine data generated by more models, as to not just amplify what is already Iambe's signal.
### Bonus
I've also added the notebook I used to train [athirdpath/Iambe-20b-v3_TEST-RP_cDPO](https://huggingface.co/athirdpath/Iambe-20b-v3_TEST-RP_cDPO) on runpod, adapted from a Kaggle by [@maximelabonne](https://twitter.com/maximelabonne) (thank you!) | athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED | [
"language:en",
"license:cc-by-nc-4.0",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T02:43:02+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]} | 2023-12-11T02:44:37+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc-by-nc-4.0 #not-for-all-audiences #region-us
|
### Description
\~3.4k DPO pairs, generated by Iambe feat. GPT-4 (~10% GPT-4, ~80% Iambe @ q5_k_m / ~10% Iambe @ q6_k) with temp 1.2 and min_p 0.15.
They are shuffled this time, as I was not aware that TRL did not do that automatically until I could see the shifts in the dataset mirrored in the loss patterns.
Iambe is a smart girl, so both the chosen and rejected for each pair are generated at the same time from a single two-part prompt (not the one in the dataset). Only a few dozen failed to generate the rejected response, and in those cases I filled in the rejected output with a standard "as an AI" style refusal. The way I set things up caused any prompt formatting errors to automatically go into the REJECTED field, hopefully discouraging such behavior.
The dataset is mostly intended for ERP, so erotic situations are an outsized portion of the dataset. However, it isn't all sexual, with sport, adventure, etc. content to provide a baseline.
### Downsides and Weaknesses
This dataset has not been manually cleaned, besides some basic search-and-replace. This dataset has been overseen in bursts (such content passed muster or the entire batch was rejected) but much of the content has yet to be read by a human. Some chosen responses have the AI acting for the user, I've removed what I've seen but this is a lot to manually filter. Despite my best efforts to prompt away from it, some rejected responses are... what seems to be... let's say, in a certain voice? I really tried to avoid it, and most of it was removed.
### Goals
This dataset is intended to be used to produce a BETTER Iambe, that can itself produce even more data of higher quality. Bootstraps to the moon, baby! So, while this dataset still has rough points, I feel like it's worth uploading and starting to train the model. I also want to combine data generated by more models, as to not just amplify what is already Iambe's signal.
### Bonus
I've also added the notebook I used to train athirdpath/Iambe-20b-v3_TEST-RP_cDPO on runpod, adapted from a Kaggle by @maximelabonne (thank you!) | [
"### Description\n\n\\~3.4k DPO pairs, generated by Iambe feat. GPT-4 (~10% GPT-4, ~80% Iambe @ q5_k_m / ~10% Iambe @ q6_k) with temp 1.2 and min_p 0.15.\n\nThey are shuffled this time, as I was not aware that TRL did not do that automatically until I could see the shifts in the dataset mirrored in the loss patterns.\n\nIambe is a smart girl, so both the chosen and rejected for each pair are generated at the same time from a single two part prompt (not the one in the dataset). Only a few dozen failed to generate the rejected response, and in those cases I filled in the rejected output with a standard \"as an AI\" style refusal. The way I set things up caused any prompt formatting errors to automatically go into the REJECTED field, hopefully discouraging such behavior.\n\nThe dataset is mostly intended for ERP, so erotic situations are an outsized portion of the dataset. However, it isn't all sexual, with sport, adventure, etc content to provide a baseline.",
"### Downsides and Weaknesses\n\nThis dataset has not been manually cleaned, besides some basic search-and-replace. This dataset has been overseen in bursts (such content passed muster or the entire batch was rejected) but much of the content has yet to be read by a human. Some chosen responses have the AI acting for the user, I've removed what I've seen but this is a lot to manually filter. Despite my best efforts to prompt away from it, some rejected responses are... what seems to be... let's say, in a certain voice? I really tried to avoid it, and most of it was removed.",
"### Goals\n\nThis dataset is intended to be used to produce a BETTER Iambe, that can itself produce even more data of higher quality. Bootstraps to the moon, baby! So, while this dataset still has rough points, I feel like it's worth uploading and starting to train the model. I also want to combine data generated by more models, as to not just amplify what is already Iambe's signal.",
"### Bonus\n\nI've also added the notebook I used to train athirdpath/Iambe-20b-v3_TEST-RP_cDPO on runpod, adapted from a Kaggle by @maximelabonne (thank you!)"
] | [
"TAGS\n#language-English #license-cc-by-nc-4.0 #not-for-all-audiences #region-us \n",
"### Description\n\n\\~3.4k DPO pairs, generated by Iambe feat. GPT-4 (~10% GPT-4, ~80% Iambe @ q5_k_m / ~10% Iambe @ q6_k) with temp 1.2 and min_p 0.15.\n\nThey are shuffled this time, as I was not aware that TRL did not do that automatically until I could see the shifts in the dataset mirrored in the loss patterns.\n\nIambe is a smart girl, so both the chosen and rejected for each pair are generated at the same time from a single two part prompt (not the one in the dataset). Only a few dozen failed to generate the rejected response, and in those cases I filled in the rejected output with a standard \"as an AI\" style refusal. The way I set things up caused any prompt formatting errors to automatically go into the REJECTED field, hopefully discouraging such behavior.\n\nThe dataset is mostly intended for ERP, so erotic situations are an outsized portion of the dataset. However, it isn't all sexual, with sport, adventure, etc content to provide a baseline.",
"### Downsides and Weaknesses\n\nThis dataset has not been manually cleaned, besides some basic search-and-replace. This dataset has been overseen in bursts (such content passed muster or the entire batch was rejected) but much of the content has yet to be read by a human. Some chosen responses have the AI acting for the user, I've removed what I've seen but this is a lot to manually filter. Despite my best efforts to prompt away from it, some rejected responses are... what seems to be... let's say, in a certain voice? I really tried to avoid it, and most of it was removed.",
"### Goals\n\nThis dataset is intended to be used to produce a BETTER Iambe, that can itself produce even more data of higher quality. Bootstraps to the moon, baby! So, while this dataset still has rough points, I feel like it's worth uploading and starting to train the model. I also want to combine data generated by more models, as to not just amplify what is already Iambe's signal.",
"### Bonus\n\nI've also added the notebook I used to train athirdpath/Iambe-20b-v3_TEST-RP_cDPO on runpod, adapted from a Kaggle by @maximelabonne (thank you!)"
] | [
30,
257,
155,
95,
57
] | [
"passage: TAGS\n#language-English #license-cc-by-nc-4.0 #not-for-all-audiences #region-us \n### Description\n\n\\~3.4k DPO pairs, generated by Iambe feat. GPT-4 (~10% GPT-4, ~80% Iambe @ q5_k_m / ~10% Iambe @ q6_k) with temp 1.2 and min_p 0.15.\n\nThey are shuffled this time, as I was not aware that TRL did not do that automatically until I could see the shifts in the dataset mirrored in the loss patterns.\n\nIambe is a smart girl, so both the chosen and rejected for each pair are generated at the same time from a single two part prompt (not the one in the dataset). Only a few dozen failed to generate the rejected response, and in those cases I filled in the rejected output with a standard \"as an AI\" style refusal. The way I set things up caused any prompt formatting errors to automatically go into the REJECTED field, hopefully discouraging such behavior.\n\nThe dataset is mostly intended for ERP, so erotic situations are an outsized portion of the dataset. However, it isn't all sexual, with sport, adventure, etc content to provide a baseline.### Downsides and Weaknesses\n\nThis dataset has not been manually cleaned, besides some basic search-and-replace. This dataset has been overseen in bursts (such content passed muster or the entire batch was rejected) but much of the content has yet to be read by a human. Some chosen responses have the AI acting for the user, I've removed what I've seen but this is a lot to manually filter. Despite my best efforts to prompt away from it, some rejected responses are... what seems to be... let's say, in a certain voice? I really tried to avoid it, and most of it was removed."
] |
a23fc43871a549f621d62a4e95202c2d86abc45c |
# Dataset Card for Evaluation run of AA051610/A11P
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/AA051610/A11P
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [AA051610/A11P](https://huggingface.co/AA051610/A11P) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AA051610__A11P",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-11T02:59:53.573351](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__A11P/blob/main/results_2023-12-11T02-59-53.573351.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7024107525999363,
"acc_stderr": 0.030362293861859797,
"acc_norm": 0.7062972608094896,
"acc_norm_stderr": 0.030951825247607496,
"mc1": 0.41370869033047736,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.5644074616941972,
"mc2_stderr": 0.015397066221595713
},
"harness|arc:challenge|25": {
"acc": 0.6040955631399317,
"acc_stderr": 0.014291228393536587,
"acc_norm": 0.6254266211604096,
"acc_norm_stderr": 0.014144193471893449
},
"harness|hellaswag|10": {
"acc": 0.6191993626767576,
"acc_stderr": 0.004845912857338663,
"acc_norm": 0.8253335988846843,
"acc_norm_stderr": 0.003789055487003176
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8355263157894737,
"acc_stderr": 0.030167533468632726,
"acc_norm": 0.8355263157894737,
"acc_norm_stderr": 0.030167533468632726
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7358490566037735,
"acc_stderr": 0.027134291628741706,
"acc_norm": 0.7358490566037735,
"acc_norm_stderr": 0.027134291628741706
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7916666666666666,
"acc_stderr": 0.03396116205845335,
"acc_norm": 0.7916666666666666,
"acc_norm_stderr": 0.03396116205845335
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.8,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7063829787234043,
"acc_stderr": 0.02977164271249123,
"acc_norm": 0.7063829787234043,
"acc_norm_stderr": 0.02977164271249123
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5526315789473685,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.5526315789473685,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7310344827586207,
"acc_stderr": 0.036951833116502325,
"acc_norm": 0.7310344827586207,
"acc_norm_stderr": 0.036951833116502325
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5476190476190477,
"acc_stderr": 0.025634258115554955,
"acc_norm": 0.5476190476190477,
"acc_norm_stderr": 0.025634258115554955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8548387096774194,
"acc_stderr": 0.02003956362805328,
"acc_norm": 0.8548387096774194,
"acc_norm_stderr": 0.02003956362805328
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.034991131376767445,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.034991131376767445
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.03087414513656208,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.03087414513656208
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8787878787878788,
"acc_stderr": 0.02325315795194208,
"acc_norm": 0.8787878787878788,
"acc_norm_stderr": 0.02325315795194208
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9222797927461139,
"acc_stderr": 0.019321805557223144,
"acc_norm": 0.9222797927461139,
"acc_norm_stderr": 0.019321805557223144
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7692307692307693,
"acc_stderr": 0.02136202772522272,
"acc_norm": 0.7692307692307693,
"acc_norm_stderr": 0.02136202772522272
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251976,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7857142857142857,
"acc_stderr": 0.026653531596715484,
"acc_norm": 0.7857142857142857,
"acc_norm_stderr": 0.026653531596715484
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.40397350993377484,
"acc_stderr": 0.040064856853653415,
"acc_norm": 0.40397350993377484,
"acc_norm_stderr": 0.040064856853653415
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8825688073394495,
"acc_stderr": 0.01380278022737734,
"acc_norm": 0.8825688073394495,
"acc_norm_stderr": 0.01380278022737734
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8970588235294118,
"acc_stderr": 0.021328337570804365,
"acc_norm": 0.8970588235294118,
"acc_norm_stderr": 0.021328337570804365
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8734177215189873,
"acc_stderr": 0.021644195727955173,
"acc_norm": 0.8734177215189873,
"acc_norm_stderr": 0.021644195727955173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7802690582959642,
"acc_stderr": 0.02779017706438359,
"acc_norm": 0.7802690582959642,
"acc_norm_stderr": 0.02779017706438359
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.03088466108951539,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.03088466108951539
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8429752066115702,
"acc_stderr": 0.03321244842547129,
"acc_norm": 0.8429752066115702,
"acc_norm_stderr": 0.03321244842547129
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8703703703703703,
"acc_stderr": 0.032472243899179465,
"acc_norm": 0.8703703703703703,
"acc_norm_stderr": 0.032472243899179465
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8282208588957055,
"acc_stderr": 0.029634717272371047,
"acc_norm": 0.8282208588957055,
"acc_norm_stderr": 0.029634717272371047
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.04616143075028546,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.04616143075028546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.0376017800602662,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.0376017800602662
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.905982905982906,
"acc_stderr": 0.019119892798924978,
"acc_norm": 0.905982905982906,
"acc_norm_stderr": 0.019119892798924978
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8850574712643678,
"acc_stderr": 0.01140572072459397,
"acc_norm": 0.8850574712643678,
"acc_norm_stderr": 0.01140572072459397
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7774566473988439,
"acc_stderr": 0.02239421566194282,
"acc_norm": 0.7774566473988439,
"acc_norm_stderr": 0.02239421566194282
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42793296089385474,
"acc_stderr": 0.01654788799741611,
"acc_norm": 0.42793296089385474,
"acc_norm_stderr": 0.01654788799741611
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7810457516339869,
"acc_stderr": 0.02367908986180772,
"acc_norm": 0.7810457516339869,
"acc_norm_stderr": 0.02367908986180772
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7717041800643086,
"acc_stderr": 0.02383930331139819,
"acc_norm": 0.7717041800643086,
"acc_norm_stderr": 0.02383930331139819
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7808641975308642,
"acc_stderr": 0.023016705640262185,
"acc_norm": 0.7808641975308642,
"acc_norm_stderr": 0.023016705640262185
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5425531914893617,
"acc_stderr": 0.029719281272236844,
"acc_norm": 0.5425531914893617,
"acc_norm_stderr": 0.029719281272236844
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5338983050847458,
"acc_stderr": 0.012740853872949839,
"acc_norm": 0.5338983050847458,
"acc_norm_stderr": 0.012740853872949839
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7132352941176471,
"acc_stderr": 0.027472274473233818,
"acc_norm": 0.7132352941176471,
"acc_norm_stderr": 0.027472274473233818
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.01755581809132227,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.01755581809132227
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.043091187099464585,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.043091187099464585
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.02783302387139968,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.02783302387139968
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.024112678240900826,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.024112678240900826
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.02876234912646613,
"acc_norm": 0.91,
"acc_norm_stderr": 0.02876234912646613
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.027539122889061445,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.027539122889061445
},
"harness|truthfulqa:mc|0": {
"mc1": 0.41370869033047736,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.5644074616941972,
"mc2_stderr": 0.015397066221595713
},
"harness|winogrande|5": {
"acc": 0.7987371744277821,
"acc_stderr": 0.011268519971577684
},
"harness|gsm8k|5": {
"acc": 0.604245640636846,
"acc_stderr": 0.013469823701048815
}
}
```
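To pull just these aggregated numbers programmatically, a small sketch using the `results` config and `latest` split described above (both are declared in this card's metadata):

```python
from datasets import load_dataset

# "results" is the aggregated config; the "latest" split always tracks the
# newest run, so this returns the numbers shown above.
results = load_dataset("open-llm-leaderboard/details_AA051610__A11P",
                       "results",
                       split="latest")
print(results[0])
```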
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_AA051610__A11P | [
"region:us"
] | 2023-12-11T03:02:43+00:00 | {"pretty_name": "Evaluation run of AA051610/A11P", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051610/A11P](https://huggingface.co/AA051610/A11P) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051610__A11P\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T02:59:53.573351](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051610__A11P/blob/main/results_2023-12-11T02-59-53.573351.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7024107525999363,\n \"acc_stderr\": 0.030362293861859797,\n \"acc_norm\": 0.7062972608094896,\n \"acc_norm_stderr\": 0.030951825247607496,\n \"mc1\": 0.41370869033047736,\n \"mc1_stderr\": 0.0172408618120998,\n \"mc2\": 0.5644074616941972,\n \"mc2_stderr\": 0.015397066221595713\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6040955631399317,\n \"acc_stderr\": 0.014291228393536587,\n \"acc_norm\": 0.6254266211604096,\n \"acc_norm_stderr\": 0.014144193471893449\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6191993626767576,\n \"acc_stderr\": 0.004845912857338663,\n \"acc_norm\": 0.8253335988846843,\n \"acc_norm_stderr\": 0.003789055487003176\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8355263157894737,\n \"acc_stderr\": 0.030167533468632726,\n \"acc_norm\": 0.8355263157894737,\n \"acc_norm_stderr\": 0.030167533468632726\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7358490566037735,\n \"acc_stderr\": 0.027134291628741706,\n \"acc_norm\": 0.7358490566037735,\n \"acc_norm_stderr\": 0.027134291628741706\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7916666666666666,\n \"acc_stderr\": 0.03396116205845335,\n \"acc_norm\": 0.7916666666666666,\n \"acc_norm_stderr\": 0.03396116205845335\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 
0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7063829787234043,\n \"acc_stderr\": 0.02977164271249123,\n \"acc_norm\": 0.7063829787234043,\n \"acc_norm_stderr\": 0.02977164271249123\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5526315789473685,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.5526315789473685,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7310344827586207,\n \"acc_stderr\": 0.036951833116502325,\n \"acc_norm\": 0.7310344827586207,\n \"acc_norm_stderr\": 0.036951833116502325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.5476190476190477,\n \"acc_stderr\": 0.025634258115554955,\n \"acc_norm\": 0.5476190476190477,\n \"acc_norm_stderr\": 0.025634258115554955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8548387096774194,\n \"acc_stderr\": 0.02003956362805328,\n \"acc_norm\": 0.8548387096774194,\n \"acc_norm_stderr\": 0.02003956362805328\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.034991131376767445,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.034991131376767445\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656208,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656208\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8787878787878788,\n \"acc_stderr\": 0.02325315795194208,\n \"acc_norm\": 0.8787878787878788,\n \"acc_norm_stderr\": 0.02325315795194208\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.019321805557223144,\n \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.019321805557223144\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7692307692307693,\n \"acc_stderr\": 
0.02136202772522272,\n \"acc_norm\": 0.7692307692307693,\n \"acc_norm_stderr\": 0.02136202772522272\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7857142857142857,\n \"acc_stderr\": 0.026653531596715484,\n \"acc_norm\": 0.7857142857142857,\n \"acc_norm_stderr\": 0.026653531596715484\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.40397350993377484,\n \"acc_stderr\": 0.040064856853653415,\n \"acc_norm\": 0.40397350993377484,\n \"acc_norm_stderr\": 0.040064856853653415\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8825688073394495,\n \"acc_stderr\": 0.01380278022737734,\n \"acc_norm\": 0.8825688073394495,\n \"acc_norm_stderr\": 0.01380278022737734\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8970588235294118,\n \"acc_stderr\": 0.021328337570804365,\n \"acc_norm\": 0.8970588235294118,\n \"acc_norm_stderr\": 0.021328337570804365\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8734177215189873,\n \"acc_stderr\": 0.021644195727955173,\n \"acc_norm\": 0.8734177215189873,\n \"acc_norm_stderr\": 0.021644195727955173\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7802690582959642,\n \"acc_stderr\": 0.02779017706438359,\n \"acc_norm\": 0.7802690582959642,\n \"acc_norm_stderr\": 0.02779017706438359\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.03088466108951539,\n \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.03088466108951539\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8429752066115702,\n \"acc_stderr\": 0.03321244842547129,\n \"acc_norm\": 0.8429752066115702,\n \"acc_norm_stderr\": 0.03321244842547129\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8703703703703703,\n \"acc_stderr\": 0.032472243899179465,\n \"acc_norm\": 0.8703703703703703,\n \"acc_norm_stderr\": 0.032472243899179465\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8282208588957055,\n \"acc_stderr\": 0.029634717272371047,\n \"acc_norm\": 0.8282208588957055,\n \"acc_norm_stderr\": 0.029634717272371047\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n \"acc_stderr\": 0.04616143075028546,\n \"acc_norm\": 0.6160714285714286,\n \"acc_norm_stderr\": 0.04616143075028546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.0376017800602662,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.0376017800602662\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n \"acc_stderr\": 0.019119892798924978,\n \"acc_norm\": 0.905982905982906,\n \"acc_norm_stderr\": 0.019119892798924978\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8850574712643678,\n \"acc_stderr\": 0.01140572072459397,\n \"acc_norm\": 0.8850574712643678,\n 
\"acc_norm_stderr\": 0.01140572072459397\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42793296089385474,\n \"acc_stderr\": 0.01654788799741611,\n \"acc_norm\": 0.42793296089385474,\n \"acc_norm_stderr\": 0.01654788799741611\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7810457516339869,\n \"acc_stderr\": 0.02367908986180772,\n \"acc_norm\": 0.7810457516339869,\n \"acc_norm_stderr\": 0.02367908986180772\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7717041800643086,\n \"acc_stderr\": 0.02383930331139819,\n \"acc_norm\": 0.7717041800643086,\n \"acc_norm_stderr\": 0.02383930331139819\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.023016705640262185,\n \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.023016705640262185\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5425531914893617,\n \"acc_stderr\": 0.029719281272236844,\n \"acc_norm\": 0.5425531914893617,\n \"acc_norm_stderr\": 0.029719281272236844\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5338983050847458,\n \"acc_stderr\": 0.012740853872949839,\n \"acc_norm\": 0.5338983050847458,\n \"acc_norm_stderr\": 0.012740853872949839\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7132352941176471,\n \"acc_stderr\": 0.027472274473233818,\n \"acc_norm\": 0.7132352941176471,\n \"acc_norm_stderr\": 0.027472274473233818\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.01755581809132227,\n \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.01755581809132227\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.02783302387139968,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.02783302387139968\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.024112678240900826,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.024112678240900826\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646613,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646613\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.027539122889061445,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.027539122889061445\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.41370869033047736,\n \"mc1_stderr\": 0.0172408618120998,\n \"mc2\": 0.5644074616941972,\n \"mc2_stderr\": 0.015397066221595713\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7987371744277821,\n \"acc_stderr\": 0.011268519971577684\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.604245640636846,\n \"acc_stderr\": 0.013469823701048815\n }\n}\n```", "repo_url": "https://huggingface.co/AA051610/A11P", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|arc:challenge|25_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|gsm8k|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hellaswag|10_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T02-59-53.573351.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T02-59-53.573351.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T02-59-53.573351.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T02-59-53.573351.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T02-59-53.573351.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T02-59-53.573351.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["**/details_harness|winogrande|5_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T02-59-53.573351.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T02_59_53.573351", "path": ["results_2023-12-11T02-59-53.573351.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T02-59-53.573351.parquet"]}]}]} | 2023-12-11T03:03:23+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of AA051610/A11P
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model AA051610/A11P on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
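A minimal sketch, assuming the repository follows the leaderboard's usual `details_<org>__<model>` naming pattern (the concrete repo name `open-llm-leaderboard/details_AA051610__A11P` is an assumption here, since the original URL was stripped from this copy of the card):

```python
from datasets import load_dataset

# Repo name is assumed from the leaderboard's details_<org>__<model> convention.
data = load_dataset("open-llm-leaderboard/details_AA051610__A11P",
	"harness_winogrande_5",
	split="train")
```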
## Latest results
These are the latest results from run 2023-12-11T02:59:53.573351 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
72588239fbc17d7048c5f79037b6e0bb4524b31a |
# Dataset Card for Evaluation run of rwitz/dec10
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rwitz/dec10
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rwitz/dec10](https://huggingface.co/rwitz/dec10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rwitz__dec10",
"harness_winogrande_5",
split="train")
```
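Since this dataset was created from 2 runs, each run is also exposed as its own timestamped split. A short sketch of selecting an earlier run explicitly instead of the "latest" alias (the split name below is taken from the configs listed in this card's metadata):

```python
from datasets import load_dataset

# Timestamped split names mirror the run timestamps recorded in the configs;
# "latest" is an alias for the most recent of these runs.
data = load_dataset("open-llm-leaderboard/details_rwitz__dec10",
	"harness_winogrande_5",
	split="2023_12_11T03_08_28.006278")
```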
## Latest results
These are the [latest results from run 2023-12-11T03:10:59.161265](https://huggingface.co/datasets/open-llm-leaderboard/details_rwitz__dec10/blob/main/results_2023-12-11T03-10-59.161265.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6540294607487833,
"acc_stderr": 0.032048882469360766,
"acc_norm": 0.6541030274313245,
"acc_norm_stderr": 0.03270870495285761,
"mc1": 0.4504283965728274,
"mc1_stderr": 0.017417264371967646,
"mc2": 0.6041998017095335,
"mc2_stderr": 0.015386323767333891
},
"harness|arc:challenge|25": {
"acc": 0.6638225255972696,
"acc_stderr": 0.013804855026205765,
"acc_norm": 0.6911262798634812,
"acc_norm_stderr": 0.013501770929344003
},
"harness|hellaswag|10": {
"acc": 0.6823341963752241,
"acc_stderr": 0.004646172373101,
"acc_norm": 0.8645688109938259,
"acc_norm_stderr": 0.0034148422365171
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754407,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754407
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6127659574468085,
"acc_stderr": 0.03184389265339526,
"acc_norm": 0.6127659574468085,
"acc_norm_stderr": 0.03184389265339526
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.02559185776138219,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.02559185776138219
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229872,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229872
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603348,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603348
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971128,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971128
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131147,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131147
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8605504587155963,
"acc_stderr": 0.014852421490033053,
"acc_norm": 0.8605504587155963,
"acc_norm_stderr": 0.014852421490033053
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.02531049537694486,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.02531049537694486
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8314176245210728,
"acc_stderr": 0.013387895731543604,
"acc_norm": 0.8314176245210728,
"acc_norm_stderr": 0.013387895731543604
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39888268156424583,
"acc_stderr": 0.016376966142610076,
"acc_norm": 0.39888268156424583,
"acc_norm_stderr": 0.016376966142610076
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.025646863097137897,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.025646863097137897
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5,
"acc_stderr": 0.029827499313594685,
"acc_norm": 0.5,
"acc_norm_stderr": 0.029827499313594685
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4745762711864407,
"acc_stderr": 0.012753716929101006,
"acc_norm": 0.4745762711864407,
"acc_norm_stderr": 0.012753716929101006
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.02824568739146293,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.02824568739146293
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.01890101532209309,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.01890101532209309
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640044,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640044
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4504283965728274,
"mc1_stderr": 0.017417264371967646,
"mc2": 0.6041998017095335,
"mc2_stderr": 0.015386323767333891
},
"harness|winogrande|5": {
"acc": 0.8074191002367798,
"acc_stderr": 0.011082538847491904
},
"harness|gsm8k|5": {
"acc": 0.7058377558756633,
"acc_stderr": 0.012551285331470152
}
}
```
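To work with these aggregated numbers programmatically, you can load the "results" configuration directly. A minimal sketch (the exact column layout of the results parquet is not documented in this card, so the code only inspects the schema rather than assuming field names):

```python
from datasets import load_dataset

# "results" is the aggregated-results configuration listed in this card's metadata;
# the "latest" split always resolves to the most recent run.
results = load_dataset("open-llm-leaderboard/details_rwitz__dec10",
	"results",
	split="latest")

print(results)           # row count and column names
print(results.features)  # schema of the aggregated results table
```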
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_rwitz__dec10 | [
"region:us"
] | 2023-12-11T03:11:20+00:00 | {"pretty_name": "Evaluation run of rwitz/dec10", "dataset_summary": "Dataset automatically created during the evaluation run of model [rwitz/dec10](https://huggingface.co/rwitz/dec10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rwitz__dec10\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T03:10:59.161265](https://huggingface.co/datasets/open-llm-leaderboard/details_rwitz__dec10/blob/main/results_2023-12-11T03-10-59.161265.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6540294607487833,\n \"acc_stderr\": 0.032048882469360766,\n \"acc_norm\": 0.6541030274313245,\n \"acc_norm_stderr\": 0.03270870495285761,\n \"mc1\": 0.4504283965728274,\n \"mc1_stderr\": 0.017417264371967646,\n \"mc2\": 0.6041998017095335,\n \"mc2_stderr\": 0.015386323767333891\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6638225255972696,\n \"acc_stderr\": 0.013804855026205765,\n \"acc_norm\": 0.6911262798634812,\n \"acc_norm_stderr\": 0.013501770929344003\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6823341963752241,\n \"acc_stderr\": 0.004646172373101,\n \"acc_norm\": 0.8645688109938259,\n \"acc_norm_stderr\": 0.0034148422365171\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754407,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754407\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6127659574468085,\n \"acc_stderr\": 0.03184389265339526,\n \"acc_norm\": 0.6127659574468085,\n \"acc_norm_stderr\": 0.03184389265339526\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.02559185776138219,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.02559185776138219\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229872,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229872\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603348\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971128,\n \"acc_norm\": 
0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971128\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131147,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131147\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188704,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188704\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8605504587155963,\n \"acc_stderr\": 0.014852421490033053,\n \"acc_norm\": 0.8605504587155963,\n \"acc_norm_stderr\": 0.014852421490033053\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.02531049537694486,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.02531049537694486\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n \"acc_stderr\": 0.013387895731543604,\n \"acc_norm\": 0.8314176245210728,\n \"acc_norm_stderr\": 0.013387895731543604\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39888268156424583,\n \"acc_stderr\": 0.016376966142610076,\n \"acc_norm\": 0.39888268156424583,\n \"acc_norm_stderr\": 0.016376966142610076\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4745762711864407,\n \"acc_stderr\": 0.012753716929101006,\n \"acc_norm\": 0.4745762711864407,\n \"acc_norm_stderr\": 0.012753716929101006\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.02824568739146293,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.02824568739146293\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.01890101532209309,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.01890101532209309\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640044,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640044\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4504283965728274,\n \"mc1_stderr\": 0.017417264371967646,\n \"mc2\": 0.6041998017095335,\n \"mc2_stderr\": 0.015386323767333891\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8074191002367798,\n \"acc_stderr\": 0.011082538847491904\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7058377558756633,\n \"acc_stderr\": 0.012551285331470152\n }\n}\n```", "repo_url": "https://huggingface.co/rwitz/dec10", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", 
"configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|arc:challenge|25_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|arc:challenge|25_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|gsm8k|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|gsm8k|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hellaswag|10_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hellaswag|10_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-08-28.006278.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T03-08-28.006278.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-10-59.161265.parquet", 
"**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-10-59.161265.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-10-59.161265.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T03-10-59.161265.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": 
"2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": 
["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["**/details_harness|winogrande|5_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": ["**/details_harness|winogrande|5_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T03-10-59.161265.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T03_08_28.006278", "path": ["results_2023-12-11T03-08-28.006278.parquet"]}, {"split": "2023_12_11T03_10_59.161265", "path": 
["results_2023-12-11T03-10-59.161265.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T03-10-59.161265.parquet"]}]}]} | 2023-12-11T03:14:31+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rwitz/dec10
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rwitz/dec10 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
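For instance, here is a minimal sketch using the Hugging Face `datasets` library. The repository id below is an assumption based on the leaderboard's `details_<org>__<model>` naming convention; the configuration and split names are taken from the configurations listed above.

```python
from datasets import load_dataset

# Load the per-sample details for one evaluated task from this run.
data = load_dataset(
    "open-llm-leaderboard/details_rwitz__dec10",  # assumed repository id
    "harness_gsm8k_5",                            # one of the 63 configurations
    split="latest",                               # always points to the most recent run
)
```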
## Latest results
These are the latest results from run 2023-12-11T03:10:59.161265 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the "results" configuration and the "latest" split for each eval):
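The aggregated metrics for this run (shown in the results JSON in the metadata above) can likewise be reloaded from the "results" configuration. This is again a sketch that assumes the same repository id:

```python
from datasets import load_dataset

# Load the aggregated metrics (the "results" configuration) for the latest run.
results = load_dataset(
    "open-llm-leaderboard/details_rwitz__dec10",  # assumed repository id
    "results",
    split="latest",
)
print(results[0])  # per-task accuracies and stderrs, matching the JSON above
```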
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of rwitz/dec10",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rwitz/dec10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T03:10:59.161265(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rwitz/dec10",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rwitz/dec10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T03:10:59.161265(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
14,
31,
163,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rwitz/dec10## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rwitz/dec10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T03:10:59.161265(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
59b1385678c9791d3e6687024d171453e96fd3b9 | # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-b-v2-test-paraphrase-2 | [
"region:us"
] | 2023-12-11T03:26:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "paraphrase", "dtype": "string"}, {"name": "paraphrase2", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 11109023, "num_examples": 3000}], "download_size": 5184022, "dataset_size": 11109023}} | 2023-12-11T03:26:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-2"
More Information needed | [
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2\"\n\nMore Information needed"
] |
9606fb773ca54799da98d4d845e0e5137015a2ca | # Dataset Card for "ocr_redseal_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/ocr_redseal_vi | [
"region:us"
] | 2023-12-11T03:35:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "Noise_level", "dtype": "int64"}, {"name": "meta", "struct": [{"name": "path", "dtype": "string"}, {"name": "subset", "dtype": "string"}]}, {"name": "path", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 5525894060.375, "num_examples": 558181}], "download_size": 5529921734, "dataset_size": 5525894060.375}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T03:42:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ocr_redseal_vi"
More Information needed | [
"# Dataset Card for \"ocr_redseal_vi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ocr_redseal_vi\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ocr_redseal_vi\"\n\nMore Information needed"
] |
a7372383ab8a39553c2d2e8f6311a3dea611f9c6 | # Dataset Card for "ocr_sohieu_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/ocr_sohieu_vi | [
"region:us"
] | 2023-12-11T03:48:57+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "path", "dtype": "string"}, {"name": "subset", "dtype": "string"}]}, {"name": "path", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 4268072.0, "num_examples": 644}], "download_size": 4266549, "dataset_size": 4268072.0}} | 2023-12-11T03:49:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ocr_sohieu_vi"
More Information needed | [
"# Dataset Card for \"ocr_sohieu_vi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ocr_sohieu_vi\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ocr_sohieu_vi\"\n\nMore Information needed"
] |
5dd14d422935ec6e09b9a9887266d1acd7bb51da | # Dataset Card for "ocr_uppercase_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/ocr_uppercase_vi | [
"region:us"
] | 2023-12-11T03:51:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "path", "dtype": "string"}, {"name": "subset", "dtype": "string"}]}, {"name": "path", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 499271753.625, "num_examples": 12323}], "download_size": 499066965, "dataset_size": 499271753.625}} | 2023-12-11T03:51:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ocr_uppercase_vi"
More Information needed | [
"# Dataset Card for \"ocr_uppercase_vi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ocr_uppercase_vi\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ocr_uppercase_vi\"\n\nMore Information needed"
] |
4a2c3f3b0d7420b73b83b3615c48554533f7603e | # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-2-mistral-7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/semeval-task-8-b-v2-test-paraphrase-2-mistral-7b | [
"region:us"
] | 2023-12-11T03:56:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "paraphrase", "dtype": "string"}, {"name": "paraphrase2", "dtype": "string"}, {"name": "mistral-7b_estimated_loss", "dtype": "float64"}, {"name": "mistral-7b_mean_lowest25", "dtype": "float64"}, {"name": "mistral-7b_mean_highest25", "dtype": "float64"}, {"name": "mistral-7b_max", "dtype": "float64"}, {"name": "mistral-7b_min", "dtype": "float64"}, {"name": "mistral-7b_range", "dtype": "float64"}, {"name": "mistral-7b_mean", "dtype": "float64"}, {"name": "mistral-7b_std", "dtype": "float64"}, {"name": "mistral-7b_entropy", "dtype": "float64"}, {"name": "mistral-7b_kurtosis", "dtype": "float64"}, {"name": "mistral-7b_skewness", "dtype": "float64"}, {"name": "mistral-7b_perplexity", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 11397023, "num_examples": 3000}], "download_size": 5515131, "dataset_size": 11397023}} | 2023-12-11T15:29:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "semeval-task-8-b-v2-test-paraphrase-2-mistral-7b"
More Information needed | [
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2-mistral-7b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2-mistral-7b\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"semeval-task-8-b-v2-test-paraphrase-2-mistral-7b\"\n\nMore Information needed"
] |
d7cddb4d5050bc87d515f2dcfdae1f9d8f89b1d0 | # Dataset Card for "ocr_fontsEnhance_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/ocr_fontsEnhance_vi | [
"region:us"
] | 2023-12-11T04:06:21+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "path", "dtype": "string"}, {"name": "subset", "dtype": "string"}]}, {"name": "path", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 2715797840.875, "num_examples": 125753}], "download_size": 2712543570, "dataset_size": 2715797840.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T04:09:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ocr_fontsEnhance_vi"
More Information needed | [
"# Dataset Card for \"ocr_fontsEnhance_vi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ocr_fontsEnhance_vi\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ocr_fontsEnhance_vi\"\n\nMore Information needed"
] |
263c0469d90875d6568ebe32fd56662f99308fab | # Dataset Card for "ocr_nameEntityRed_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ademax/ocr_nameEntityRed_vi | [
"region:us"
] | 2023-12-11T04:14:33+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "meta", "struct": [{"name": "path", "dtype": "string"}, {"name": "subset", "dtype": "string"}]}, {"name": "path", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 348986062.5, "num_examples": 57500}], "download_size": 352082024, "dataset_size": 348986062.5}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T04:15:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ocr_nameEntityRed_vi"
More Information needed | [
"# Dataset Card for \"ocr_nameEntityRed_vi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ocr_nameEntityRed_vi\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ocr_nameEntityRed_vi\"\n\nMore Information needed"
] |
aa376010d551ff21f1275f442d77ebf9b6278056 | I deep-cleaned a version of the Locutusque/InstructMix-V2 dataset, which was originally quite large. To manage this, I sampled 123,590 examples from the dataset and then removed any OpenAI refusals, referencing the refusals list from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered. The resulting dataset now contains 106,149 cleaned examples (see the filtering sketch after this record). | Locutusque/inst_mix_v2_top_100k | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | 2023-12-11T04:14:41+00:00 | {"language": ["en"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational", "question-answering"]} | 2023-12-14T01:56:46+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us
| I deep-cleaned a version of the Locutusque/InstructMix-V2 dataset, which was originally quite large. To manage this, I sampled 123,590 examples from the dataset and then removed any OpenAI refusals, referencing the refusals list from URL The resulting dataset now contains 106,149 cleaned examples. | [] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us \n"
] | [
55
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-100K<n<1M #language-English #region-us \n"
] |
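The cleaning procedure described in the inst_mix_v2_top_100k record above (sample, then drop OpenAI refusals) can be illustrated with a short sketch. This is an assumption-laden illustration, not the author's actual script: the `output` field name and the refusal phrases are hypothetical stand-ins for the ShareGPT_Vicuna_unfiltered refusal list.

```python
import random

# Illustrative stand-ins for the real refusal list (an assumption, not the
# exact phrases referenced in the record above).
REFUSAL_PHRASES = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
]

def is_refusal(text: str) -> bool:
    """True if the text contains a known refusal phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in REFUSAL_PHRASES)

def clean(examples: list[dict], sample_size: int, seed: int = 0) -> list[dict]:
    """Sample `sample_size` examples, then drop those whose (hypothetical)
    "output" field looks like a refusal."""
    rng = random.Random(seed)
    sampled = rng.sample(examples, min(sample_size, len(examples)))
    return [ex for ex in sampled if not is_refusal(ex["output"])]

demo = [{"output": "Sure, here is the answer."},
        {"output": "I'm sorry, but I cannot fulfill that request."}]
print(clean(demo, sample_size=2))  # keeps only the non-refusal example
```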
d3b7067b6cf27df84b9c698f93c9702efd148aca |
# Dataset Card for Evaluation run of cyberagent/calm2-7b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/cyberagent/calm2-7b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [cyberagent/calm2-7b-chat](https://huggingface.co/cyberagent/calm2-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cyberagent__calm2-7b-chat",
"harness_winogrande_5",
	split="latest")
```
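The aggregated scores can be loaded the same way through the "results" configuration; the configuration and split names below are taken from the file listing of this repo, so they should match, but this is shown as a sketch rather than an official snippet:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of the run; the "latest" split
# always points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_cyberagent__calm2-7b-chat",
    "results",
    split="latest",
)
print(results[0])  # one row holding the run's aggregated metrics
```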
## Latest results
These are the [latest results from run 2023-12-11T04:41:23.645738](https://huggingface.co/datasets/open-llm-leaderboard/details_cyberagent__calm2-7b-chat/blob/main/results_2023-12-11T04-41-23.645738.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.39379663191330316,
"acc_stderr": 0.03433785284156447,
"acc_norm": 0.39896146189258175,
"acc_norm_stderr": 0.0351728212913433,
"mc1": 0.2607099143206854,
"mc1_stderr": 0.015368841620766367,
"mc2": 0.4196186456267839,
"mc2_stderr": 0.01433169483869778
},
"harness|arc:challenge|25": {
"acc": 0.3609215017064846,
"acc_stderr": 0.014034761386175458,
"acc_norm": 0.40273037542662116,
"acc_norm_stderr": 0.014332236306790147
},
"harness|hellaswag|10": {
"acc": 0.5070703047201752,
"acc_stderr": 0.004989282516055394,
"acc_norm": 0.68123879705238,
"acc_norm_stderr": 0.004650438781745311
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847415,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847415
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.04304979692464242,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.04304979692464242
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.04060127035236397,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.04060127035236397
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.43018867924528303,
"acc_stderr": 0.030471445867183235,
"acc_norm": 0.43018867924528303,
"acc_norm_stderr": 0.030471445867183235
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.375,
"acc_stderr": 0.04048439222695598,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04048439222695598
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206824,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206824
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3699421965317919,
"acc_stderr": 0.036812296333943194,
"acc_norm": 0.3699421965317919,
"acc_norm_stderr": 0.036812296333943194
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.04280105837364395,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.04280105837364395
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.33191489361702126,
"acc_stderr": 0.03078373675774564,
"acc_norm": 0.33191489361702126,
"acc_norm_stderr": 0.03078373675774564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.04303684033537315,
"acc_norm": 0.2982456140350877,
"acc_norm_stderr": 0.04303684033537315
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.38620689655172413,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.38620689655172413,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2804232804232804,
"acc_stderr": 0.023135287974325635,
"acc_norm": 0.2804232804232804,
"acc_norm_stderr": 0.023135287974325635
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.04073524322147126,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.04073524322147126
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.36774193548387096,
"acc_stderr": 0.02743086657997347,
"acc_norm": 0.36774193548387096,
"acc_norm_stderr": 0.02743086657997347
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.32019704433497537,
"acc_stderr": 0.032826493853041504,
"acc_norm": 0.32019704433497537,
"acc_norm_stderr": 0.032826493853041504
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.48484848484848486,
"acc_stderr": 0.03902551007374448,
"acc_norm": 0.48484848484848486,
"acc_norm_stderr": 0.03902551007374448
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.035402943770953675,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.035402943770953675
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.5233160621761658,
"acc_stderr": 0.03604513672442202,
"acc_norm": 0.5233160621761658,
"acc_norm_stderr": 0.03604513672442202
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3076923076923077,
"acc_stderr": 0.023400928918310495,
"acc_norm": 0.3076923076923077,
"acc_norm_stderr": 0.023400928918310495
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24814814814814815,
"acc_stderr": 0.0263357394040558,
"acc_norm": 0.24814814814814815,
"acc_norm_stderr": 0.0263357394040558
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.029597329730978103,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.029597329730978103
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.43486238532110094,
"acc_stderr": 0.021254631465609273,
"acc_norm": 0.43486238532110094,
"acc_norm_stderr": 0.021254631465609273
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3287037037037037,
"acc_stderr": 0.032036140846700596,
"acc_norm": 0.3287037037037037,
"acc_norm_stderr": 0.032036140846700596
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.45588235294117646,
"acc_stderr": 0.03495624522015474,
"acc_norm": 0.45588235294117646,
"acc_norm_stderr": 0.03495624522015474
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.4978902953586498,
"acc_stderr": 0.032546938018020076,
"acc_norm": 0.4978902953586498,
"acc_norm_stderr": 0.032546938018020076
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.47085201793721976,
"acc_stderr": 0.03350073248773404,
"acc_norm": 0.47085201793721976,
"acc_norm_stderr": 0.03350073248773404
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.45038167938931295,
"acc_stderr": 0.04363643698524779,
"acc_norm": 0.45038167938931295,
"acc_norm_stderr": 0.04363643698524779
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5371900826446281,
"acc_stderr": 0.04551711196104218,
"acc_norm": 0.5371900826446281,
"acc_norm_stderr": 0.04551711196104218
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.4351851851851852,
"acc_stderr": 0.04792898170907062,
"acc_norm": 0.4351851851851852,
"acc_norm_stderr": 0.04792898170907062
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3803680981595092,
"acc_stderr": 0.03814269893261837,
"acc_norm": 0.3803680981595092,
"acc_norm_stderr": 0.03814269893261837
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.04327040932578728,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.04327040932578728
},
"harness|hendrycksTest-management|5": {
"acc": 0.3592233009708738,
"acc_stderr": 0.04750458399041693,
"acc_norm": 0.3592233009708738,
"acc_norm_stderr": 0.04750458399041693
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.5170940170940171,
"acc_stderr": 0.032736940493481824,
"acc_norm": 0.5170940170940171,
"acc_norm_stderr": 0.032736940493481824
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.52,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.52,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5070242656449553,
"acc_stderr": 0.017878199003432217,
"acc_norm": 0.5070242656449553,
"acc_norm_stderr": 0.017878199003432217
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.40173410404624277,
"acc_stderr": 0.02639410417764363,
"acc_norm": 0.40173410404624277,
"acc_norm_stderr": 0.02639410417764363
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2860335195530726,
"acc_stderr": 0.015113972129062138,
"acc_norm": 0.2860335195530726,
"acc_norm_stderr": 0.015113972129062138
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4150326797385621,
"acc_stderr": 0.0282135041778241,
"acc_norm": 0.4150326797385621,
"acc_norm_stderr": 0.0282135041778241
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3954983922829582,
"acc_stderr": 0.027770918531427838,
"acc_norm": 0.3954983922829582,
"acc_norm_stderr": 0.027770918531427838
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4104938271604938,
"acc_stderr": 0.027371350925124764,
"acc_norm": 0.4104938271604938,
"acc_norm_stderr": 0.027371350925124764
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3262411347517731,
"acc_stderr": 0.027968453043563168,
"acc_norm": 0.3262411347517731,
"acc_norm_stderr": 0.027968453043563168
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.32985658409387225,
"acc_stderr": 0.012008129938540472,
"acc_norm": 0.32985658409387225,
"acc_norm_stderr": 0.012008129938540472
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.40441176470588236,
"acc_stderr": 0.029812630701569736,
"acc_norm": 0.40441176470588236,
"acc_norm_stderr": 0.029812630701569736
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.01955964680921593,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.01955964680921593
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.43636363636363634,
"acc_stderr": 0.04750185058907297,
"acc_norm": 0.43636363636363634,
"acc_norm_stderr": 0.04750185058907297
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.49387755102040815,
"acc_stderr": 0.03200682020163907,
"acc_norm": 0.49387755102040815,
"acc_norm_stderr": 0.03200682020163907
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5024875621890548,
"acc_stderr": 0.03535490150137289,
"acc_norm": 0.5024875621890548,
"acc_norm_stderr": 0.03535490150137289
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-virology|5": {
"acc": 0.39759036144578314,
"acc_stderr": 0.03809973084540217,
"acc_norm": 0.39759036144578314,
"acc_norm_stderr": 0.03809973084540217
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5146198830409356,
"acc_stderr": 0.03833185275213025,
"acc_norm": 0.5146198830409356,
"acc_norm_stderr": 0.03833185275213025
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2607099143206854,
"mc1_stderr": 0.015368841620766367,
"mc2": 0.4196186456267839,
"mc2_stderr": 0.01433169483869778
},
"harness|winogrande|5": {
"acc": 0.6495659037095501,
"acc_stderr": 0.013409047676670187
},
"harness|gsm8k|5": {
"acc": 0.04927975739196361,
"acc_stderr": 0.005962150655812477
}
}
```
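For orientation, the single MMLU figure shown on the leaderboard is the mean accuracy over the `harness|hendrycksTest-*` subtasks above. Below is a minimal, self-contained sketch of that roll-up, using a small excerpt of the dictionary (the full dictionary holds all 57 hendrycksTest entries):

```python
# Excerpt of the results shown above; values copied from the JSON block.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.27},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.45925925925925926},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.46710526315789475},
    "harness|truthfulqa:mc|0": {"mc1": 0.2607099143206854},  # not an MMLU task
}

mmlu_accs = [
    scores["acc"]
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"mean acc over {len(mmlu_accs)} MMLU subtasks: "
      f"{sum(mmlu_accs) / len(mmlu_accs):.4f}")
```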
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_cyberagent__calm2-7b-chat | [
"region:us"
] | 2023-12-11T04:44:11+00:00 | {"pretty_name": "Evaluation run of cyberagent/calm2-7b-chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [cyberagent/calm2-7b-chat](https://huggingface.co/cyberagent/calm2-7b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cyberagent__calm2-7b-chat\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T04:41:23.645738](https://huggingface.co/datasets/open-llm-leaderboard/details_cyberagent__calm2-7b-chat/blob/main/results_2023-12-11T04-41-23.645738.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.39379663191330316,\n \"acc_stderr\": 0.03433785284156447,\n \"acc_norm\": 0.39896146189258175,\n \"acc_norm_stderr\": 0.0351728212913433,\n \"mc1\": 0.2607099143206854,\n \"mc1_stderr\": 0.015368841620766367,\n \"mc2\": 0.4196186456267839,\n \"mc2_stderr\": 0.01433169483869778\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3609215017064846,\n \"acc_stderr\": 0.014034761386175458,\n \"acc_norm\": 0.40273037542662116,\n \"acc_norm_stderr\": 0.014332236306790147\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5070703047201752,\n \"acc_stderr\": 0.004989282516055394,\n \"acc_norm\": 0.68123879705238,\n \"acc_norm_stderr\": 0.004650438781745311\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45925925925925926,\n \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.45925925925925926,\n \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.46710526315789475,\n \"acc_stderr\": 0.04060127035236397,\n \"acc_norm\": 0.46710526315789475,\n \"acc_norm_stderr\": 0.04060127035236397\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.43018867924528303,\n \"acc_stderr\": 0.030471445867183235,\n \"acc_norm\": 0.43018867924528303,\n \"acc_norm_stderr\": 0.030471445867183235\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.375,\n \"acc_stderr\": 0.04048439222695598,\n \"acc_norm\": 0.375,\n \"acc_norm_stderr\": 0.04048439222695598\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206824,\n \"acc_norm\": 
0.29,\n \"acc_norm_stderr\": 0.045604802157206824\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3699421965317919,\n \"acc_stderr\": 0.036812296333943194,\n \"acc_norm\": 0.3699421965317919,\n \"acc_norm_stderr\": 0.036812296333943194\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.04280105837364395,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.04280105837364395\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.33191489361702126,\n \"acc_stderr\": 0.03078373675774564,\n \"acc_norm\": 0.33191489361702126,\n \"acc_norm_stderr\": 0.03078373675774564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2982456140350877,\n \"acc_stderr\": 0.04303684033537315,\n \"acc_norm\": 0.2982456140350877,\n \"acc_norm_stderr\": 0.04303684033537315\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.38620689655172413,\n \"acc_stderr\": 0.04057324734419035,\n \"acc_norm\": 0.38620689655172413,\n \"acc_norm_stderr\": 0.04057324734419035\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2804232804232804,\n \"acc_stderr\": 0.023135287974325635,\n \"acc_norm\": 0.2804232804232804,\n \"acc_norm_stderr\": 0.023135287974325635\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.04073524322147126,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.04073524322147126\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.36774193548387096,\n \"acc_stderr\": 0.02743086657997347,\n \"acc_norm\": 0.36774193548387096,\n \"acc_norm_stderr\": 0.02743086657997347\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.32019704433497537,\n \"acc_stderr\": 0.032826493853041504,\n \"acc_norm\": 0.32019704433497537,\n \"acc_norm_stderr\": 0.032826493853041504\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.48484848484848486,\n \"acc_stderr\": 0.03902551007374448,\n \"acc_norm\": 0.48484848484848486,\n \"acc_norm_stderr\": 0.03902551007374448\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.035402943770953675,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.035402943770953675\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.5233160621761658,\n \"acc_stderr\": 0.03604513672442202,\n \"acc_norm\": 0.5233160621761658,\n \"acc_norm_stderr\": 0.03604513672442202\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.3076923076923077,\n \"acc_stderr\": 0.023400928918310495,\n \"acc_norm\": 0.3076923076923077,\n \"acc_norm_stderr\": 0.023400928918310495\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24814814814814815,\n \"acc_stderr\": 0.0263357394040558,\n \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.0263357394040558\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.029597329730978103,\n \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.029597329730978103\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.43486238532110094,\n \"acc_stderr\": 0.021254631465609273,\n \"acc_norm\": 0.43486238532110094,\n \"acc_norm_stderr\": 0.021254631465609273\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3287037037037037,\n \"acc_stderr\": 0.032036140846700596,\n \"acc_norm\": 0.3287037037037037,\n \"acc_norm_stderr\": 0.032036140846700596\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.45588235294117646,\n \"acc_stderr\": 0.03495624522015474,\n \"acc_norm\": 0.45588235294117646,\n \"acc_norm_stderr\": 0.03495624522015474\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.4978902953586498,\n \"acc_stderr\": 0.032546938018020076,\n \"acc_norm\": 0.4978902953586498,\n \"acc_norm_stderr\": 0.032546938018020076\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.47085201793721976,\n \"acc_stderr\": 0.03350073248773404,\n \"acc_norm\": 0.47085201793721976,\n \"acc_norm_stderr\": 0.03350073248773404\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.45038167938931295,\n \"acc_stderr\": 0.04363643698524779,\n \"acc_norm\": 0.45038167938931295,\n \"acc_norm_stderr\": 0.04363643698524779\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5371900826446281,\n \"acc_stderr\": 0.04551711196104218,\n \"acc_norm\": 0.5371900826446281,\n \"acc_norm_stderr\": 0.04551711196104218\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.04792898170907062,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.04792898170907062\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3803680981595092,\n \"acc_stderr\": 0.03814269893261837,\n \"acc_norm\": 0.3803680981595092,\n \"acc_norm_stderr\": 0.03814269893261837\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n \"acc_stderr\": 0.04327040932578728,\n \"acc_norm\": 0.29464285714285715,\n \"acc_norm_stderr\": 0.04327040932578728\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.3592233009708738,\n \"acc_stderr\": 0.04750458399041693,\n \"acc_norm\": 0.3592233009708738,\n \"acc_norm_stderr\": 0.04750458399041693\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5170940170940171,\n \"acc_stderr\": 0.032736940493481824,\n \"acc_norm\": 0.5170940170940171,\n \"acc_norm_stderr\": 0.032736940493481824\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5070242656449553,\n \"acc_stderr\": 0.017878199003432217,\n 
\"acc_norm\": 0.5070242656449553,\n \"acc_norm_stderr\": 0.017878199003432217\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.40173410404624277,\n \"acc_stderr\": 0.02639410417764363,\n \"acc_norm\": 0.40173410404624277,\n \"acc_norm_stderr\": 0.02639410417764363\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2860335195530726,\n \"acc_stderr\": 0.015113972129062138,\n \"acc_norm\": 0.2860335195530726,\n \"acc_norm_stderr\": 0.015113972129062138\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4150326797385621,\n \"acc_stderr\": 0.0282135041778241,\n \"acc_norm\": 0.4150326797385621,\n \"acc_norm_stderr\": 0.0282135041778241\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3954983922829582,\n \"acc_stderr\": 0.027770918531427838,\n \"acc_norm\": 0.3954983922829582,\n \"acc_norm_stderr\": 0.027770918531427838\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4104938271604938,\n \"acc_stderr\": 0.027371350925124764,\n \"acc_norm\": 0.4104938271604938,\n \"acc_norm_stderr\": 0.027371350925124764\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3262411347517731,\n \"acc_stderr\": 0.027968453043563168,\n \"acc_norm\": 0.3262411347517731,\n \"acc_norm_stderr\": 0.027968453043563168\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.32985658409387225,\n \"acc_stderr\": 0.012008129938540472,\n \"acc_norm\": 0.32985658409387225,\n \"acc_norm_stderr\": 0.012008129938540472\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.40441176470588236,\n \"acc_stderr\": 0.029812630701569736,\n \"acc_norm\": 0.40441176470588236,\n \"acc_norm_stderr\": 0.029812630701569736\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.01955964680921593,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.01955964680921593\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.43636363636363634,\n \"acc_stderr\": 0.04750185058907297,\n \"acc_norm\": 0.43636363636363634,\n \"acc_norm_stderr\": 0.04750185058907297\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.49387755102040815,\n \"acc_stderr\": 0.03200682020163907,\n \"acc_norm\": 0.49387755102040815,\n \"acc_norm_stderr\": 0.03200682020163907\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5024875621890548,\n \"acc_stderr\": 0.03535490150137289,\n \"acc_norm\": 0.5024875621890548,\n \"acc_norm_stderr\": 0.03535490150137289\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.39759036144578314,\n \"acc_stderr\": 0.03809973084540217,\n \"acc_norm\": 0.39759036144578314,\n \"acc_norm_stderr\": 0.03809973084540217\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.5146198830409356,\n \"acc_stderr\": 0.03833185275213025,\n \"acc_norm\": 0.5146198830409356,\n \"acc_norm_stderr\": 0.03833185275213025\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2607099143206854,\n \"mc1_stderr\": 0.015368841620766367,\n \"mc2\": 0.4196186456267839,\n \"mc2_stderr\": 0.01433169483869778\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6495659037095501,\n \"acc_stderr\": 0.013409047676670187\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04927975739196361,\n \"acc_stderr\": 0.005962150655812477\n }\n}\n```", "repo_url": 
"https://huggingface.co/cyberagent/calm2-7b-chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|arc:challenge|25_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|gsm8k|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hellaswag|10_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T04-41-23.645738.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T04-41-23.645738.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T04-41-23.645738.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T04-41-23.645738.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T04-41-23.645738.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T04-41-23.645738.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["**/details_harness|winogrande|5_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T04-41-23.645738.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T04_41_23.645738", "path": ["results_2023-12-11T04-41-23.645738.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T04-41-23.645738.parquet"]}]}]} | 2023-12-11T04:45:04+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of cyberagent/calm2-7b-chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model cyberagent/calm2-7b-chat on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
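For example, a minimal sketch following the naming convention used by other evaluation-run datasets on the leaderboard (the exact repository name is an assumption and should be verified):

```python
from datasets import load_dataset

# Repository name inferred from the usual
# "open-llm-leaderboard/details_<org>__<model>" convention; verify before use.
data = load_dataset("open-llm-leaderboard/details_cyberagent__calm2-7b-chat",
                    "harness_winogrande_5",  # one of the 63 task configurations
                    split="train")
```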
## Latest results
These are the latest results from run 2023-12-11T04:41:23.645738 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of cyberagent/calm2-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model cyberagent/calm2-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T04:41:23.645738(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of cyberagent/calm2-7b-chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model cyberagent/calm2-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T04:41:23.645738(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
19,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of cyberagent/calm2-7b-chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model cyberagent/calm2-7b-chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T04:41:23.645738(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4b61064014345c150db4aca0ed01ace1c79dd1b3 | # Dataset Card for "sst2_non_affix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/sst2_non_affix | [
"region:us"
] | 2023-12-11T04:44:19+00:00 | {"dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}], "splits": [{"name": "validation", "num_bytes": 98088.14220183487, "num_examples": 805}], "download_size": 66484, "dataset_size": 98088.14220183487}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-11T04:44:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sst2_non_affix"
More Information needed | [
"# Dataset Card for \"sst2_non_affix\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sst2_non_affix\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"sst2_non_affix\"\n\nMore Information needed"
] |
fa128a8cb9d027b1dc3190564ed8a4e9b115456b | # Dataset Card for "VNTL-v2-2k-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lmg-anon/VNTL-v2-2k-small | [
"region:us"
] | 2023-12-11T04:47:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8479907, "num_examples": 1666}, {"name": "val", "num_bytes": 1012198, "num_examples": 199}], "download_size": 4197269, "dataset_size": 9492105}} | 2023-12-11T05:49:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "VNTL-v2-2k-small"
More Information needed | [
"# Dataset Card for \"VNTL-v2-2k-small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"VNTL-v2-2k-small\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"VNTL-v2-2k-small\"\n\nMore Information needed"
] |
5f33e213ae99a4886731e6d5859b5d44ece490a2 | # Dataset Card for "rt_non_affix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joey234/rt_non_affix | [
"region:us"
] | 2023-12-11T04:48:16+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "splits": [{"name": "test", "num_bytes": 122830.3789868668, "num_examples": 963}], "download_size": 79719, "dataset_size": 122830.3789868668}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]} | 2023-12-11T04:48:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rt_non_affix"
More Information needed | [
"# Dataset Card for \"rt_non_affix\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rt_non_affix\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"rt_non_affix\"\n\nMore Information needed"
] |
e7c686f8345da9f7e71bdef16dec3f9b48b11916 | # Dataset Card for "imdb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Yehoon/imdb | [
"region:us"
] | 2023-12-11T05:17:47+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13003200, "num_examples": 10195}], "download_size": 8156718, "dataset_size": 13003200}} | 2023-12-14T05:22:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "imdb"
More Information needed | [
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb\"\n\nMore Information needed"
] | [
6,
12
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb\"\n\nMore Information needed"
] |
e7a2a91e12c1f0251d18b9b72e03693b40dd5911 | # Dataset Card for "gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Yehoon/gsm8k | [
"region:us"
] | 2023-12-11T05:28:04+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "options", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3993094, "num_examples": 7473}], "download_size": 2293586, "dataset_size": 3993094}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-19T05:00:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "gsm8k"
More Information needed | [
"# Dataset Card for \"gsm8k\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"gsm8k\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"gsm8k\"\n\nMore Information needed"
] |
38a48cbd263d57aa75a143349fa1a77a1bcf3459 | # wikiquote_tv
🤖 This README was written by GPT-4. 🤖
# Overview
This dataset, gathered from [wikiquote.org](https://en.wikiquote.org/), contains a comprehensive collection of quotes, actions, and conversations from various television shows. The dataset is particularly useful for training language models, offering a rich source of dialogues and narrative structures.
# Dataset Description
- **Content**: Quotes, actions, and conversations from a range of TV shows.
- **Source**: Extracted from [wikiquote.org](https://en.wikiquote.org/).
- **Structure**: The dataset includes data classes for `Action`, `Quote`, and `Conversation`, encapsulating individual elements of dialogues (see the sketch after this list).
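As a rough illustration of what these data classes might look like, here is a minimal sketch (field names are assumptions made for illustration; the actual definitions live in the repository code):

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Action:
    # A stage direction or non-verbal event, e.g. "[sighs]".
    description: str

@dataclass
class Quote:
    # A single line of dialogue attributed to a character.
    speaker: str
    text: str

@dataclass
class Conversation:
    # An ordered exchange of quotes and actions from one scene.
    turns: List[Union[Quote, Action]]
```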
# Features
- **Parse Functionality**: Extracts quotes and actions from the HTML content of [wikiquote.org](https://en.wikiquote.org/) pages.
- **Comprehensive Coverage**: Includes shows across a wide range of genres and time periods.
- **Customizable**: Flexible enough to be used for a variety of NLP tasks and research.
# Usage
The dataset may be useful for the following tasks (a loading sketch follows the list):
- **Natural Language Understanding**: Understanding context, humor, and character dynamics in conversations.
- **Language Modeling**: Training models to generate dialogues or predict next lines in conversations.
- **Cultural Analysis**: Studying trends and themes across various television shows.
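For any of these tasks, the data can be loaded with the `datasets` library. A minimal sketch (the split name here is an assumption; check the repository for the actual schema):

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub.
ds = load_dataset("joshuasundance/wikiquote_tv", split="train")  # split name assumed

# Inspect the first record to see the available fields.
print(ds[0])
```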
# License
This dataset and the accompanying code are released under the [MIT License](./LICENSE.md). The contents of the data are collected from [wikiquote.org](https://en.wikiquote.org/) as per [the repo code](https://huggingface.co/datasets/joshuasundance/wikiquote_tv/blob/main/wikiquote_tv.ipynb), and no ownership or rights are claimed over the data.
# Disclaimer
This dataset is intended for research and educational purposes.
# Contributions
Contributions are welcome! Feel free to submit issues or pull requests [on the HuggingFace repository](https://huggingface.co/datasets/joshuasundance/wikiquote_tv). | joshuasundance/wikiquote_tv | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"television",
"roleplaying",
"region:us"
] | 2023-12-11T05:31:25+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational"], "tags": ["television", "roleplaying"]} | 2023-12-11T05:49:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-mit #television #roleplaying #region-us
| # wikiquote_tv
This README was written by GPT-4.
# Overview
This dataset, gathered from URL, contains a comprehensive collection of quotes, actions, and conversations from various television shows. The dataset is particularly useful for training language models, offering a rich source of dialogues and narrative structures.
# Dataset Description
- Content: Quotes, actions, and conversations from a range of TV shows.
- Source: Extracted from URL.
- Structure: The dataset includes data classes for 'Action', 'Quote', and 'Conversation', encapsulating individual elements of dialogues.
# Features
- Parse Functionality: Extracts quotes and actions from the HTML content of URL pages.
- Comprehensive Coverage: Includes shows across a wide range of genres and time periods.
- Customizable: Flexible to be used for a variety of NLP tasks and research.
# Usage
The dataset may be useful for:
- Natural Language Understanding: Understanding context, humor, and character dynamics in conversations.
- Language Modeling: Training models to generate dialogues or predict next lines in conversations.
- Cultural Analysis: Studying trends and themes across various television shows.
# License
This dataset and the accompanying code are released under the MIT License. The contents of the data are collected from URL as per the repo code, and no ownership or rights are claimed over the data.
# Disclaimer
This dataset is intended for research and educational purposes.
# Contributions
Contributions are welcome! Feel free to submit issues or pull requests on the HuggingFace repository. | [
"# wikiquote_tv\n\n This README was written by GPT-4.",
"# Overview\nThis dataset, gathered from URL, contains a comprehensive collection of quotes, actions, and conversations from various television shows. The dataset is particularly useful for training language models, offering a rich source of dialogues and narrative structures.",
"# Dataset Description\n- Content: Quotes, actions, and conversations from a range of TV shows.\n- Source: Extracted from URL.\n- Structure: The dataset includes data classes for 'Action', 'Quote', and 'Conversation', encapsulating individual elements of dialogues.",
"# Features\n- Parse Functionality: Extracts quotes and actions from the HTML content of URL pages.\n- Comprehensive Coverage: Includes shows across a wide range of genres and time periods.\n- Customizable: Flexible to be used for a variety of NLP tasks and research.",
"# Usage\nThe dataset may be useful for:\n- Natural Language Understanding: Understanding context, humor, and character dynamics in conversations.\n- Language Modeling: Training models to generate dialogues or predict next lines in conversations.\n- Cultural Analysis: Studying trends and themes across various television shows.",
"# License\nThis dataset and the accompanying code are released under the MIT License. The contents of the data are collected from URL as per the repo code, and no ownership or rights are claimed over the data.",
"# Disclaimer\nThis dataset is intended for research and educational purposes.",
"# Contributions\nContributions are welcome! Feel free to submit issues or pull requests on the HuggingFace repository."
] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-mit #television #roleplaying #region-us \n",
"# wikiquote_tv\n\n This README was written by GPT-4.",
"# Overview\nThis dataset, gathered from URL, contains a comprehensive collection of quotes, actions, and conversations from various television shows. The dataset is particularly useful for training language models, offering a rich source of dialogues and narrative structures.",
"# Dataset Description\n- Content: Quotes, actions, and conversations from a range of TV shows.\n- Source: Extracted from URL.\n- Structure: The dataset includes data classes for 'Action', 'Quote', and 'Conversation', encapsulating individual elements of dialogues.",
"# Features\n- Parse Functionality: Extracts quotes and actions from the HTML content of URL pages.\n- Comprehensive Coverage: Includes shows across a wide range of genres and time periods.\n- Customizable: Flexible to be used for a variety of NLP tasks and research.",
"# Usage\nThe dataset may be useful for:\n- Natural Language Understanding: Understanding context, humor, and character dynamics in conversations.\n- Language Modeling: Training models to generate dialogues or predict next lines in conversations.\n- Cultural Analysis: Studying trends and themes across various television shows.",
"# License\nThis dataset and the accompanying code are released under the MIT License. The contents of the data are collected from URL as per the repo code, and no ownership or rights are claimed over the data.",
"# Disclaimer\nThis dataset is intended for research and educational purposes.",
"# Contributions\nContributions are welcome! Feel free to submit issues or pull requests on the HuggingFace repository."
] | [
56,
15,
56,
69,
69,
67,
48,
14,
29
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #language-English #license-mit #television #roleplaying #region-us \n# wikiquote_tv\n\n This README was written by GPT-4.# Overview\nThis dataset, gathered from URL, contains a comprehensive collection of quotes, actions, and conversations from various television shows. The dataset is particularly useful for training language models, offering a rich source of dialogues and narrative structures.# Dataset Description\n- Content: Quotes, actions, and conversations from a range of TV shows.\n- Source: Extracted from URL.\n- Structure: The dataset includes data classes for 'Action', 'Quote', and 'Conversation', encapsulating individual elements of dialogues.# Features\n- Parse Functionality: Extracts quotes and actions from the HTML content of URL pages.\n- Comprehensive Coverage: Includes shows across a wide range of genres and time periods.\n- Customizable: Flexible to be used for a variety of NLP tasks and research.# Usage\nThe dataset may be useful for:\n- Natural Language Understanding: Understanding context, humor, and character dynamics in conversations.\n- Language Modeling: Training models to generate dialogues or predict next lines in conversations.\n- Cultural Analysis: Studying trends and themes across various television shows.# License\nThis dataset and the accompanying code are released under the MIT License. The contents of the data are collected from URL as per the repo code, and no ownership or rights are claimed over the data.# Disclaimer\nThis dataset is intended for research and educational purposes.# Contributions\nContributions are welcome! Feel free to submit issues or pull requests on the HuggingFace repository."
] |
8dd2df76a6f2f04d86f01df8c2c16fb6c04210e1 |
# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Reasoner
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hedronstone/OpenHermes-7B-Reasoner
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [hedronstone/OpenHermes-7B-Reasoner](https://huggingface.co/hedronstone/OpenHermes-7B-Reasoner) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) — more precisely, it stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset

# Load the details of one task configuration (here the 5-shot Winogrande run);
# the "train" split always points to the latest results.
data = load_dataset("open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner",
                    "harness_winogrande_5",
                    split="train")
```
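Similarly, the aggregated metrics mentioned above live in the "results" configuration; a minimal sketch of loading the latest aggregate (assuming its split naming matches the per-task configurations):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of each run;
# the "latest" split points at the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner",
                       "results",
                       split="latest")
```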
## Latest results
These are the [latest results from run 2023-12-11T05:33:20.920109](https://huggingface.co/datasets/open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner/blob/main/results_2023-12-11T05-33-20.920109.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6264107604791354,
"acc_stderr": 0.03244629935008131,
"acc_norm": 0.6296805420206979,
"acc_norm_stderr": 0.03308716638008267,
"mc1": 0.33047735618115054,
"mc1_stderr": 0.016466769613698296,
"mc2": 0.48821727865548903,
"mc2_stderr": 0.0150448263523402
},
"harness|arc:challenge|25": {
"acc": 0.5870307167235495,
"acc_stderr": 0.014388344935398326,
"acc_norm": 0.6313993174061433,
"acc_norm_stderr": 0.014097810678042196
},
"harness|hellaswag|10": {
"acc": 0.6386178052180841,
"acc_stderr": 0.004794191785967951,
"acc_norm": 0.8273252340171281,
"acc_norm_stderr": 0.0037719340427991577
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.0373362665538351,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.0373362665538351
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406786,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406786
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.02354079935872329,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.02354079935872329
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.0274796030105388,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.0274796030105388
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709447,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709447
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6333333333333333,
"acc_stderr": 0.02443301646605246,
"acc_norm": 0.6333333333333333,
"acc_norm_stderr": 0.02443301646605246
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188705,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188705
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639325,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639325
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477518,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477518
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281382,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281382
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8160919540229885,
"acc_stderr": 0.013853724170922524,
"acc_norm": 0.8160919540229885,
"acc_norm_stderr": 0.013853724170922524
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677003,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677003
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808842,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808842
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153266,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153266
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.025089478523765137,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.025089478523765137
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4432624113475177,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.4432624113475177,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4654498044328553,
"acc_stderr": 0.012739711554045704,
"acc_norm": 0.4654498044328553,
"acc_norm_stderr": 0.012739711554045704
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6507352941176471,
"acc_stderr": 0.028959755196824866,
"acc_norm": 0.6507352941176471,
"acc_norm_stderr": 0.028959755196824866
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.01924978569171721,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.01924978569171721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274645,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274645
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623326,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623326
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33047735618115054,
"mc1_stderr": 0.016466769613698296,
"mc2": 0.48821727865548903,
"mc2_stderr": 0.0150448263523402
},
"harness|winogrande|5": {
"acc": 0.7584846093133386,
"acc_stderr": 0.012028983782011879
},
"harness|gsm8k|5": {
"acc": 0.5344958301743745,
"acc_stderr": 0.013739668147545915
}
}
```
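Since the structure above is a plain nested mapping, individual scores can be pulled out with ordinary dictionary access. A minimal sketch ranking the MMLU (hendrycksTest) sub-tasks by accuracy (assuming the JSON above has been saved locally as `results.json`, a hypothetical path):

```python
import json

# Load the results JSON shown above (the local path is an assumption).
with open("results.json") as f:
    results = json.load(f)

# Collect the accuracy of every MMLU (hendrycksTest) sub-task.
mmlu_acc = {task: scores["acc"]
            for task, scores in results.items()
            if task.startswith("harness|hendrycksTest")}

# Print the five strongest sub-tasks.
for task, acc in sorted(mmlu_acc.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{task}: {acc:.3f}")
```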
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner | [
"region:us"
] | 2023-12-11T05:34:34+00:00 | {"pretty_name": "Evaluation run of hedronstone/OpenHermes-7B-Reasoner", "dataset_summary": "Dataset automatically created during the evaluation run of model [hedronstone/OpenHermes-7B-Reasoner](https://huggingface.co/hedronstone/OpenHermes-7B-Reasoner) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T05:33:20.920109](https://huggingface.co/datasets/open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner/blob/main/results_2023-12-11T05-33-20.920109.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6264107604791354,\n \"acc_stderr\": 0.03244629935008131,\n \"acc_norm\": 0.6296805420206979,\n \"acc_norm_stderr\": 0.03308716638008267,\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.48821727865548903,\n \"mc2_stderr\": 0.0150448263523402\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5870307167235495,\n \"acc_stderr\": 0.014388344935398326,\n \"acc_norm\": 0.6313993174061433,\n \"acc_norm_stderr\": 0.014097810678042196\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6386178052180841,\n \"acc_stderr\": 0.004794191785967951,\n \"acc_norm\": 0.8273252340171281,\n \"acc_norm_stderr\": 0.0037719340427991577\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340355,\n \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340355\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406786,\n \"acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406786\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.02354079935872329,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.02354079935872329\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.0274796030105388,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.0274796030105388\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709447,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709447\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6333333333333333,\n \"acc_stderr\": 0.02443301646605246,\n \"acc_norm\": 0.6333333333333333,\n \"acc_norm_stderr\": 0.02443301646605246\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188705,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188705\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477518,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477518\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281382,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281382\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8160919540229885,\n \"acc_stderr\": 0.013853724170922524,\n \"acc_norm\": 0.8160919540229885,\n \"acc_norm_stderr\": 0.013853724170922524\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388677003,\n \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388677003\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808842,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808842\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n \"acc_stderr\": 0.026236965881153266,\n \"acc_norm\": 0.6913183279742765,\n \"acc_norm_stderr\": 0.026236965881153266\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.025089478523765137,\n \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.025089478523765137\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n \"acc_stderr\": 0.012739711554045704,\n \"acc_norm\": 0.4654498044328553,\n \"acc_norm_stderr\": 0.012739711554045704\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.028959755196824866,\n \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.028959755196824866\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.01924978569171721,\n \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.01924978569171721\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274645,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274645\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623326,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623326\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.48821727865548903,\n \"mc2_stderr\": 0.0150448263523402\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7584846093133386,\n \"acc_stderr\": 0.012028983782011879\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5344958301743745,\n \"acc_stderr\": 
0.013739668147545915\n }\n}\n```", "repo_url": "https://huggingface.co/hedronstone/OpenHermes-7B-Reasoner", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|arc:challenge|25_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|arc:challenge|25_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|gsm8k|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|gsm8k|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hellaswag|10_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hellaswag|10_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-31-40.703795.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T05-31-40.703795.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-33-20.920109.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-33-20.920109.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-33-20.920109.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T05-33-20.920109.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-33-20.920109.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": 
"2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": 
"2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-31-40.703795.parquet"]}, 
{"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["**/details_harness|winogrande|5_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": ["**/details_harness|winogrande|5_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T05-33-20.920109.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T05_31_40.703795", "path": ["results_2023-12-11T05-31-40.703795.parquet"]}, {"split": "2023_12_11T05_33_20.920109", "path": 
["results_2023-12-11T05-33-20.920109.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T05-33-20.920109.parquet"]}]}]} | 2023-12-11T05:36:30+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Reasoner
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Reasoner on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
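A minimal sketch with the `datasets` library, assuming the details are published under the leaderboard's usual `open-llm-leaderboard/details_*` naming scheme (the repo id below is an assumption; the config name and the `latest` split are taken from this card's configuration list):

```python
from datasets import load_dataset

# Repo id follows the leaderboard's usual naming convention — an assumption; adjust if needed.
data = load_dataset(
    "open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Reasoner",
    "harness_winogrande_5",  # one of the task configurations listed in this card's metadata
    split="latest",          # "latest" points to the most recent evaluation run
)
```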
## Latest results
These are the latest results from run 2023-12-11T05:33:20.920109 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Reasoner",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Reasoner on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T05:33:20.920109(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Reasoner",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Reasoner on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T05:33:20.920109(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Reasoner## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Reasoner on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T05:33:20.920109(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
00f4e39b810474e7a875065e8ded78450daccf67 |
# Dataset of Kaneshiya Shitara
This is the dataset of Kaneshiya Shitara, containing 309 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 309 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 675 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 776 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 309 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 309 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 309 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 675 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 675 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 539 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 776 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 776 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
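As a sketch, one of the archives above can be fetched with the `huggingface_hub` client (the repo id is taken from this card's metadata; the filenames match the Download column, and their availability in the repo is assumed):

```python
from huggingface_hub import hf_hub_download

# Download one packaged variant; any filename from the table above can be substituted.
path = hf_hub_download(
    repo_id="CyberHarem/kaneshiya_shitara_alicegearaegisexpansion",
    repo_type="dataset",
    filename="dataset-384x512.zip",
)
print(path)  # local path of the cached zip archive
```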
| CyberHarem/kaneshiya_shitara_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T05:50:59+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T06:00:19+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Kaneshiya Shitara
============================
This is the dataset of Kaneshiya Shitara, containing 309 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
7272411d64eb1bfe2e916867d8849fec42b9f767 |
# First Generation Pokemon images dataset
Based on the Kaggle dataset [Pokemon Images, First Generation(17000 files)](https://www.kaggle.com/datasets/mikoajkolman/pokemon-images-first-generation17000-files).
There are 143 directories, one for each Pokemon from the first generation except for Nidoran male and female.
Each directory contains from 50 to 200 images of the corresponding Pokemon. | Dusduo/1stGen-Pokemon-Images | [
"region:us"
] | 2023-12-11T05:53:35+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Abra", "1": "Aerodactyl", "2": "Alakazam", "3": "Arbok", "4": "Arcanine", "5": "Articuno", "6": "Beedrill", "7": "Bellsprout", "8": "Blastoise", "9": "Bulbasaur", "10": "Butterfree", "11": "Caterpie", "12": "Chansey", "13": "Charizard", "14": "Charmander", "15": "Charmeleon", "16": "Clefable", "17": "Clefairy", "18": "Cloyster", "19": "Cubone", "20": "Dewgong", "21": "Diglett", "22": "Ditto", "23": "Dodrio", "24": "Doduo", "25": "Dragonair", "26": "Dragonite", "27": "Dratini", "28": "Drowzee", "29": "Dugtrio", "30": "Eevee", "31": "Ekans", "32": "Electabuzz", "33": "Electrode", "34": "Exeggcute", "35": "Exeggutor", "36": "Farfetchd", "37": "Fearow", "38": "Flareon", "39": "Gastly", "40": "Gengar", "41": "Geodude", "42": "Gloom", "43": "Golbat", "44": "Goldeen", "45": "Golduck", "46": "Graveler", "47": "Grimer", "48": "Growlithe", "49": "Gyarados", "50": "Haunter", "51": "Hitmonchan", "52": "Hitmonlee", "53": "Horsea", "54": "Hypno", "55": "Ivysaur", "56": "Jigglypuff", "57": "Jolteon", "58": "Jynx", "59": "Kabutops", "60": "Kadabra", "61": "Kakuna", "62": "Kangaskhan", "63": "Kingler", "64": "Koffing", "65": "Lapras", "66": "Lickitung", "67": "Machamp", "68": "Machoke", "69": "Machop", "70": "Magikarp", "71": "Magmar", "72": "Magnemite", "73": "Magneton", "74": "Mankey", "75": "Marowak", "76": "Meowth", "77": "Metapod", "78": "Mew", "79": "Mewtwo", "80": "Moltres", "81": "Mr. Mime", "82": "MrMime", "83": "Nidoking", "84": "Nidoqueen", "85": "Nidorina", "86": "Nidorino", "87": "Ninetales", "88": "Oddish", "89": "Omanyte", "90": "Omastar", "91": "Parasect", "92": "Pidgeot", "93": "Pidgeotto", "94": "Pidgey", "95": "Pikachu", "96": "Pinsir", "97": "Poliwag", "98": "Poliwhirl", "99": "Poliwrath", "100": "Ponyta", "101": "Porygon", "102": "Primeape", "103": "Psyduck", "104": "Raichu", "105": "Rapidash", "106": "Raticate", "107": "Rattata", "108": "Rhydon", "109": "Rhyhorn", "110": "Sandshrew", "111": "Sandslash", "112": "Scyther", "113": "Seadra", "114": "Seaking", "115": "Seel", "116": "Shellder", "117": "Slowbro", "118": "Slowpoke", "119": "Snorlax", "120": "Spearow", "121": "Squirtle", "122": "Starmie", "123": "Staryu", "124": "Tangela", "125": "Tauros", "126": "Tentacool", "127": "Tentacruel", "128": "Vaporeon", "129": "Venomoth", "130": "Venonat", "131": "Venusaur", "132": "Victreebel", "133": "Vileplume", "134": "Voltorb", "135": "Vulpix", "136": "Wartortle", "137": "Weedle", "138": "Weepinbell", "139": "Weezing", "140": "Wigglytuff", "141": "Zapdos", "142": "Zubat"}}}}], "splits": [{"name": "train", "num_bytes": 1594428375.04, "num_examples": 8440}, {"name": "test", "num_bytes": 366693088.45, "num_examples": 2110}], "download_size": 1928731574, "dataset_size": 1961121463.49}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-17T10:32:22+00:00 | [] | [] | TAGS
#region-us
|
# First Generation Pokemon images dataset
Based on the Kaggle dataset Pokemon Images, First Generation(17000 files).
There are 143 directories, one for each Pokemon from the first generation except for Nidoran male and female.
Each directory contains from 50 to 200 images of the corresponding Pokemon. | [
"# First Generation Pokemon images dataset\n\nBased on the Kaggle dataset Pokemon Images, First Generation(17000 files).\n\nThere are 143 directories for each Pokemon from the first generation except for Nidoran male and female. \nEach directory containing from 50 to 200 images from the considered pokemon."
] | [
"TAGS\n#region-us \n",
"# First Generation Pokemon images dataset\n\nBased on the Kaggle dataset Pokemon Images, First Generation(17000 files).\n\nThere are 143 directories for each Pokemon from the first generation except for Nidoran male and female. \nEach directory containing from 50 to 200 images from the considered pokemon."
] | [
6,
60
] | [
"passage: TAGS\n#region-us \n# First Generation Pokemon images dataset\n\nBased on the Kaggle dataset Pokemon Images, First Generation(17000 files).\n\nThere are 143 directories for each Pokemon from the first generation except for Nidoran male and female. \nEach directory containing from 50 to 200 images from the considered pokemon."
] |
306d05cd590eae1875f23a078a0699b439e59a40 |
The dialogsum dataset was translated using Gugugo-koen, a Korean-English translation model.
All rights to this data belong to knkarthick, the original author.
Original dataset: dialogsum
https://huggingface.co/datasets/knkarthick/dialogsum
Translation model used: Gugugo-koen
https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ | hankang2023/dialogsum.kr | [
"task_categories:conversational",
"region:us"
] | 2023-12-11T06:07:18+00:00 | {"task_categories": ["conversational"]} | 2023-12-20T00:00:46+00:00 | [] | [] | TAGS
#task_categories-conversational #region-us
|
The dialogsum dataset was translated using Gugugo-koen, a Korean-English translation model.
All rights to this data belong to knkarthick, the original author.
Original dataset: dialogsum
URL
Translation model used: Gugugo-koen
URL | [] | [
"TAGS\n#task_categories-conversational #region-us \n"
] | [
16
] | [
"passage: TAGS\n#task_categories-conversational #region-us \n"
] |
6ca0162dce8147a1c4c97acc3e95f510c1995040 | # Dataset Card for "lean-dojo-mathlib4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | JohnYang88/lean-dojo-mathlib4 | [
"region:us"
] | 2023-12-11T06:17:54+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "commit", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "full_name", "dtype": "string"}, {"name": "start", "sequence": "int64"}, {"name": "end", "sequence": "int64"}, {"name": "traced_tactics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 320023872, "num_examples": 98514}, {"name": "test", "num_bytes": 6116916, "num_examples": 2000}, {"name": "validation", "num_bytes": 7228697, "num_examples": 2000}], "download_size": 54194769, "dataset_size": 333369485}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-11T06:54:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lean-dojo-mathlib4"
More Information needed | [
"# Dataset Card for \"lean-dojo-mathlib4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lean-dojo-mathlib4\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"lean-dojo-mathlib4\"\n\nMore Information needed"
] |
79c1e8e5c46d33647bd8b3b3a29191f0b810dd0c |
# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Symbolic
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hedronstone/OpenHermes-7B-Symbolic
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [hedronstone/OpenHermes-7B-Symbolic](https://huggingface.co/hedronstone/OpenHermes-7B-Symbolic) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic",
"harness_winogrande_5",
split="train")
```
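To see which configurations are available before loading one, you can list them with the `datasets` library. A minimal sketch; the per-task configuration used below (`harness_gsm8k_5`) is one of the configurations stored in this repo, and the "latest" split is equivalent to "train" for this single-run dataset:

```python
from datasets import get_dataset_config_names, load_dataset

# List the 63 configurations (one per evaluated task, plus "results").
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic"
)
print(len(configs), configs[:3])

# Load the per-sample details for one task from the most recent run.
gsm8k_details = load_dataset(
    "open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic",
    "harness_gsm8k_5",
    split="latest",
)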
## Latest results
These are the [latest results from run 2023-12-11T06:22:23.753929](https://huggingface.co/datasets/open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic/blob/main/results_2023-12-11T06-22-23.753929.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6264107604791354,
"acc_stderr": 0.03244629935008131,
"acc_norm": 0.6296805420206979,
"acc_norm_stderr": 0.03308716638008267,
"mc1": 0.33047735618115054,
"mc1_stderr": 0.016466769613698296,
"mc2": 0.48821727865548903,
"mc2_stderr": 0.0150448263523402
},
"harness|arc:challenge|25": {
"acc": 0.5870307167235495,
"acc_stderr": 0.014388344935398326,
"acc_norm": 0.6313993174061433,
"acc_norm_stderr": 0.014097810678042196
},
"harness|hellaswag|10": {
"acc": 0.6386178052180841,
"acc_stderr": 0.004794191785967951,
"acc_norm": 0.8273252340171281,
"acc_norm_stderr": 0.0037719340427991577
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.028450154794118637,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.028450154794118637
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.0373362665538351,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.0373362665538351
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.048971049527263666,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.048971049527263666
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406786,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406786
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.02354079935872329,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.02354079935872329
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.0274796030105388,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.0274796030105388
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709447,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709447
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6333333333333333,
"acc_stderr": 0.02443301646605246,
"acc_norm": 0.6333333333333333,
"acc_norm_stderr": 0.02443301646605246
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.29259259259259257,
"acc_stderr": 0.027738969632176088,
"acc_norm": 0.29259259259259257,
"acc_norm_stderr": 0.027738969632176088
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188705,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188705
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639325,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639325
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477518,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477518
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281382,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281382
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8160919540229885,
"acc_stderr": 0.013853724170922524,
"acc_norm": 0.8160919540229885,
"acc_norm_stderr": 0.013853724170922524
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677003,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677003
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808842,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808842
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153266,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153266
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.025089478523765137,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.025089478523765137
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4432624113475177,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.4432624113475177,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4654498044328553,
"acc_stderr": 0.012739711554045704,
"acc_norm": 0.4654498044328553,
"acc_norm_stderr": 0.012739711554045704
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6507352941176471,
"acc_stderr": 0.028959755196824866,
"acc_norm": 0.6507352941176471,
"acc_norm_stderr": 0.028959755196824866
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.01924978569171721,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.01924978569171721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274645,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274645
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623326,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623326
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.02991312723236804,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.02991312723236804
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33047735618115054,
"mc1_stderr": 0.016466769613698296,
"mc2": 0.48821727865548903,
"mc2_stderr": 0.0150448263523402
},
"harness|winogrande|5": {
"acc": 0.7584846093133386,
"acc_stderr": 0.012028983782011879
},
"harness|gsm8k|5": {
"acc": 0.5344958301743745,
"acc_stderr": 0.013739668147545915
}
}
```
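The aggregated numbers above can also be consumed programmatically via the "results" configuration. A minimal sketch; the simple unweighted mean below is only an approximation of a benchmark average, and the overall "all" accuracy is used here as a stand-in for the MMLU average, which this file does not report separately:

```python
from datasets import load_dataset

# Load the aggregated results configuration for the latest run.
results = load_dataset(
    "open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic",
    "results",
    split="latest",
)

# Headline scores copied from the JSON above (rounded to 4 decimals).
scores = {
    "arc_challenge_acc_norm": 0.6314,
    "hellaswag_acc_norm": 0.8273,
    "mmlu_acc_proxy": 0.6264,  # "all" accuracy, used as an MMLU proxy
    "truthfulqa_mc2": 0.4882,
    "winogrande_acc": 0.7585,
    "gsm8k_acc": 0.5345,
}
average = sum(scores.values()) / len(scores)
print(f"approximate benchmark average: {average:.4f}")  # ~0.6444
```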
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic | [
"region:us"
] | 2023-12-11T06:25:15+00:00 | {"pretty_name": "Evaluation run of hedronstone/OpenHermes-7B-Symbolic", "dataset_summary": "Dataset automatically created during the evaluation run of model [hedronstone/OpenHermes-7B-Symbolic](https://huggingface.co/hedronstone/OpenHermes-7B-Symbolic) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T06:22:23.753929](https://huggingface.co/datasets/open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic/blob/main/results_2023-12-11T06-22-23.753929.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6264107604791354,\n \"acc_stderr\": 0.03244629935008131,\n \"acc_norm\": 0.6296805420206979,\n \"acc_norm_stderr\": 0.03308716638008267,\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.48821727865548903,\n \"mc2_stderr\": 0.0150448263523402\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5870307167235495,\n \"acc_stderr\": 0.014388344935398326,\n \"acc_norm\": 0.6313993174061433,\n \"acc_norm_stderr\": 0.014097810678042196\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6386178052180841,\n \"acc_stderr\": 0.004794191785967951,\n \"acc_norm\": 0.8273252340171281,\n \"acc_norm_stderr\": 0.0037719340427991577\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340355,\n \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340355\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406786,\n \"acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406786\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.02354079935872329,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.02354079935872329\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.0274796030105388,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.0274796030105388\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709447,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709447\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6333333333333333,\n \"acc_stderr\": 0.02443301646605246,\n \"acc_norm\": 0.6333333333333333,\n \"acc_norm_stderr\": 0.02443301646605246\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176088,\n \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176088\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188705,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188705\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477518,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477518\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281382,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281382\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8160919540229885,\n \"acc_stderr\": 0.013853724170922524,\n \"acc_norm\": 0.8160919540229885,\n \"acc_norm_stderr\": 0.013853724170922524\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388677003,\n \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388677003\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808842,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808842\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n \"acc_stderr\": 0.026236965881153266,\n \"acc_norm\": 0.6913183279742765,\n \"acc_norm_stderr\": 0.026236965881153266\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.025089478523765137,\n \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.025089478523765137\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n \"acc_stderr\": 0.012739711554045704,\n \"acc_norm\": 0.4654498044328553,\n \"acc_norm_stderr\": 0.012739711554045704\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.028959755196824866,\n \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.028959755196824866\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.01924978569171721,\n \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.01924978569171721\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274645,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274645\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623326,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623326\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.48821727865548903,\n \"mc2_stderr\": 0.0150448263523402\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7584846093133386,\n \"acc_stderr\": 0.012028983782011879\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5344958301743745,\n \"acc_stderr\": 
0.013739668147545915\n }\n}\n```", "repo_url": "https://huggingface.co/hedronstone/OpenHermes-7B-Symbolic", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-22-23.753929.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-22-23.753929.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-22-23.753929.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-22-23.753929.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-22-23.753929.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T06_22_23.753929", "path": ["**/details_harness|winogrande|5_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T06-22-23.753929.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_11T06_22_23.753929", "path": ["results_2023-12-11T06-22-23.753929.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T06-22-23.753929.parquet"]}]}]} | 2023-12-11T06:25:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Symbolic
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Symbolic on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
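A minimal sketch (the repository id below is an assumption, following the leaderboard's `details_<org>__<model>` naming convention; `harness_winogrande_5` is one of this card's listed configurations):

```python
from datasets import load_dataset

# Assumed repo id, following the leaderboard's details_<org>__<model> pattern.
data = load_dataset(
    "open-llm-leaderboard/details_hedronstone__OpenHermes-7B-Symbolic",
    "harness_winogrande_5",
    split="train",
)
```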
## Latest results
These are the latest results from run 2023-12-11T06:22:23.753929 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Symbolic",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Symbolic on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:22:23.753929(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Symbolic",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Symbolic on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:22:23.753929(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
22,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of hedronstone/OpenHermes-7B-Symbolic## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model hedronstone/OpenHermes-7B-Symbolic on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T06:22:23.753929(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
01f0d7baac9307a53e572c19e99a0c9f1719a550 |
# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b",
"harness_winogrande_5",
split="train")
```
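The aggregated metrics mentioned above live in the "results" configuration; a minimal sketch for loading them (the `latest` split alias is taken from this card's configuration list):

```python
from datasets import load_dataset

# "results" stores one row of aggregated metrics per evaluation run;
# the "latest" split always points at the most recent timestamped run.
results = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b",
    "results",
    split="latest",
)
print(results[0])
```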
## Latest results
These are the [latest results from run 2023-12-11T06:27:25.240845](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b/blob/main/results_2023-12-11T06-27-25.240845.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5437023455976233,
"acc_stderr": 0.033871507196197255,
"acc_norm": 0.5505601573345996,
"acc_norm_stderr": 0.03461966094765279,
"mc1": 0.2741738066095471,
"mc1_stderr": 0.015616518497219371,
"mc2": 0.4119583674563438,
"mc2_stderr": 0.014060895458246502
},
"harness|arc:challenge|25": {
"acc": 0.4658703071672355,
"acc_stderr": 0.014577311315231104,
"acc_norm": 0.5034129692832765,
"acc_norm_stderr": 0.014611050403244077
},
"harness|hellaswag|10": {
"acc": 0.5773750248954391,
"acc_stderr": 0.004929672777184318,
"acc_norm": 0.7747460665206134,
"acc_norm_stderr": 0.0041689568322017196
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5037037037037037,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.5037037037037037,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.04063302731486671,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.04063302731486671
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5811320754716981,
"acc_stderr": 0.030365050829115208,
"acc_norm": 0.5811320754716981,
"acc_norm_stderr": 0.030365050829115208
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04122728707651282,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04122728707651282
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5086705202312138,
"acc_stderr": 0.0381189098894041,
"acc_norm": 0.5086705202312138,
"acc_norm_stderr": 0.0381189098894041
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808778,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808778
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4723404255319149,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.4723404255319149,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.04514496132873634,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.04514496132873634
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.36243386243386244,
"acc_stderr": 0.024757473902752045,
"acc_norm": 0.36243386243386244,
"acc_norm_stderr": 0.024757473902752045
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04216370213557835,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04216370213557835
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.632258064516129,
"acc_stderr": 0.02743086657997347,
"acc_norm": 0.632258064516129,
"acc_norm_stderr": 0.02743086657997347
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4433497536945813,
"acc_stderr": 0.03495334582162934,
"acc_norm": 0.4433497536945813,
"acc_norm_stderr": 0.03495334582162934
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6848484848484848,
"acc_stderr": 0.0362773057502241,
"acc_norm": 0.6848484848484848,
"acc_norm_stderr": 0.0362773057502241
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6868686868686869,
"acc_stderr": 0.033042050878136525,
"acc_norm": 0.6868686868686869,
"acc_norm_stderr": 0.033042050878136525
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7512953367875648,
"acc_stderr": 0.031195840877700286,
"acc_norm": 0.7512953367875648,
"acc_norm_stderr": 0.031195840877700286
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4948717948717949,
"acc_stderr": 0.02534967290683866,
"acc_norm": 0.4948717948717949,
"acc_norm_stderr": 0.02534967290683866
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5252100840336135,
"acc_stderr": 0.03243718055137411,
"acc_norm": 0.5252100840336135,
"acc_norm_stderr": 0.03243718055137411
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7064220183486238,
"acc_stderr": 0.019525151122639667,
"acc_norm": 0.7064220183486238,
"acc_norm_stderr": 0.019525151122639667
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4351851851851852,
"acc_stderr": 0.03381200005643525,
"acc_norm": 0.4351851851851852,
"acc_norm_stderr": 0.03381200005643525
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.03132179803083291,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.03132179803083291
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7383966244725738,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.7383966244725738,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6457399103139013,
"acc_stderr": 0.032100621541349864,
"acc_norm": 0.6457399103139013,
"acc_norm_stderr": 0.032100621541349864
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6183206106870229,
"acc_stderr": 0.0426073515764456,
"acc_norm": 0.6183206106870229,
"acc_norm_stderr": 0.0426073515764456
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.04026187527591205,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.04026187527591205
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.0478034362693679,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.0478034362693679
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6625766871165644,
"acc_stderr": 0.037149084099355745,
"acc_norm": 0.6625766871165644,
"acc_norm_stderr": 0.037149084099355745
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.0457237235873743,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.0457237235873743
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7991452991452992,
"acc_stderr": 0.02624677294689049,
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.02624677294689049
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7369093231162197,
"acc_stderr": 0.015745497169049057,
"acc_norm": 0.7369093231162197,
"acc_norm_stderr": 0.015745497169049057
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6098265895953757,
"acc_stderr": 0.026261677607806636,
"acc_norm": 0.6098265895953757,
"acc_norm_stderr": 0.026261677607806636
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.29720670391061454,
"acc_stderr": 0.015285313353641602,
"acc_norm": 0.29720670391061454,
"acc_norm_stderr": 0.015285313353641602
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.027826109307283693,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.027826109307283693
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6463022508038585,
"acc_stderr": 0.027155208103200868,
"acc_norm": 0.6463022508038585,
"acc_norm_stderr": 0.027155208103200868
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6512345679012346,
"acc_stderr": 0.026517597724465013,
"acc_norm": 0.6512345679012346,
"acc_norm_stderr": 0.026517597724465013
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3748370273794003,
"acc_stderr": 0.01236365246755193,
"acc_norm": 0.3748370273794003,
"acc_norm_stderr": 0.01236365246755193
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5772058823529411,
"acc_stderr": 0.030008562845003472,
"acc_norm": 0.5772058823529411,
"acc_norm_stderr": 0.030008562845003472
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5522875816993464,
"acc_stderr": 0.02011692534742242,
"acc_norm": 0.5522875816993464,
"acc_norm_stderr": 0.02011692534742242
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5636363636363636,
"acc_stderr": 0.04750185058907297,
"acc_norm": 0.5636363636363636,
"acc_norm_stderr": 0.04750185058907297
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6285714285714286,
"acc_stderr": 0.03093285879278986,
"acc_norm": 0.6285714285714286,
"acc_norm_stderr": 0.03093285879278986
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7512437810945274,
"acc_stderr": 0.030567675938916714,
"acc_norm": 0.7512437810945274,
"acc_norm_stderr": 0.030567675938916714
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7426900584795322,
"acc_stderr": 0.03352799844161865,
"acc_norm": 0.7426900584795322,
"acc_norm_stderr": 0.03352799844161865
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2741738066095471,
"mc1_stderr": 0.015616518497219371,
"mc2": 0.4119583674563438,
"mc2_stderr": 0.014060895458246502
},
"harness|winogrande|5": {
"acc": 0.739542225730071,
"acc_stderr": 0.01233483367199829
},
"harness|gsm8k|5": {
"acc": 0.17816527672479152,
"acc_stderr": 0.01054013252754947
}
}
```
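The same file can also be fetched directly from the dataset repository; the sketch below uses `huggingface_hub.hf_hub_download` and assumes the downloaded JSON has the top-level layout shown in the excerpt above:

```python
import json

from huggingface_hub import hf_hub_download

# Download the results file linked above from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b",
    filename="results_2023-12-11T06-27-25.240845.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# Assumes an "all" block at the top level, as in the excerpt above.
print(results["all"]["acc"], results["all"]["acc_norm"])
```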
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b | [
"region:us"
] | 2023-12-11T06:30:16+00:00 | {"pretty_name": "Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T06:27:25.240845](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b/blob/main/results_2023-12-11T06-27-25.240845.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5437023455976233,\n \"acc_stderr\": 0.033871507196197255,\n \"acc_norm\": 0.5505601573345996,\n \"acc_norm_stderr\": 0.03461966094765279,\n \"mc1\": 0.2741738066095471,\n \"mc1_stderr\": 0.015616518497219371,\n \"mc2\": 0.4119583674563438,\n \"mc2_stderr\": 0.014060895458246502\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4658703071672355,\n \"acc_stderr\": 0.014577311315231104,\n \"acc_norm\": 0.5034129692832765,\n \"acc_norm_stderr\": 0.014611050403244077\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5773750248954391,\n \"acc_stderr\": 0.004929672777184318,\n \"acc_norm\": 0.7747460665206134,\n \"acc_norm_stderr\": 0.0041689568322017196\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.04063302731486671,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.04063302731486671\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5811320754716981,\n \"acc_stderr\": 0.030365050829115208,\n \"acc_norm\": 0.5811320754716981,\n \"acc_norm_stderr\": 0.030365050829115208\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04122728707651282,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.04122728707651282\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5086705202312138,\n \"acc_stderr\": 0.0381189098894041,\n \"acc_norm\": 0.5086705202312138,\n \"acc_norm_stderr\": 0.0381189098894041\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4723404255319149,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.4723404255319149,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n \"acc_stderr\": 0.04514496132873634,\n \"acc_norm\": 0.35964912280701755,\n \"acc_norm_stderr\": 0.04514496132873634\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36243386243386244,\n \"acc_stderr\": 0.024757473902752045,\n \"acc_norm\": 0.36243386243386244,\n \"acc_norm_stderr\": 0.024757473902752045\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04216370213557835,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04216370213557835\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.632258064516129,\n \"acc_stderr\": 0.02743086657997347,\n \"acc_norm\": 0.632258064516129,\n \"acc_norm_stderr\": 0.02743086657997347\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4433497536945813,\n \"acc_stderr\": 0.03495334582162934,\n \"acc_norm\": 0.4433497536945813,\n \"acc_norm_stderr\": 0.03495334582162934\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6848484848484848,\n \"acc_stderr\": 0.0362773057502241,\n \"acc_norm\": 0.6848484848484848,\n \"acc_norm_stderr\": 0.0362773057502241\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6868686868686869,\n \"acc_stderr\": 0.033042050878136525,\n \"acc_norm\": 0.6868686868686869,\n \"acc_norm_stderr\": 0.033042050878136525\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7512953367875648,\n \"acc_stderr\": 0.031195840877700286,\n \"acc_norm\": 0.7512953367875648,\n \"acc_norm_stderr\": 0.031195840877700286\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4948717948717949,\n \"acc_stderr\": 0.02534967290683866,\n \"acc_norm\": 0.4948717948717949,\n \"acc_norm_stderr\": 0.02534967290683866\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5252100840336135,\n \"acc_stderr\": 0.03243718055137411,\n \"acc_norm\": 0.5252100840336135,\n \"acc_norm_stderr\": 0.03243718055137411\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7064220183486238,\n \"acc_stderr\": 0.019525151122639667,\n \"acc_norm\": 0.7064220183486238,\n \"acc_norm_stderr\": 0.019525151122639667\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.03381200005643525,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.03381200005643525\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.03132179803083291,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.03132179803083291\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7383966244725738,\n \"acc_stderr\": 0.028609516716994934,\n \"acc_norm\": 0.7383966244725738,\n \"acc_norm_stderr\": 0.028609516716994934\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6183206106870229,\n \"acc_stderr\": 0.0426073515764456,\n \"acc_norm\": 0.6183206106870229,\n \"acc_norm_stderr\": 0.0426073515764456\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591205,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.0478034362693679,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.0478034362693679\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.037149084099355745,\n \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.037149084099355745\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n \"acc_stderr\": 0.0457237235873743,\n \"acc_norm\": 0.36607142857142855,\n \"acc_norm_stderr\": 0.0457237235873743\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7991452991452992,\n \"acc_stderr\": 0.02624677294689049,\n \"acc_norm\": 0.7991452991452992,\n \"acc_norm_stderr\": 0.02624677294689049\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7369093231162197,\n \"acc_stderr\": 
0.015745497169049057,\n \"acc_norm\": 0.7369093231162197,\n \"acc_norm_stderr\": 0.015745497169049057\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6098265895953757,\n \"acc_stderr\": 0.026261677607806636,\n \"acc_norm\": 0.6098265895953757,\n \"acc_norm_stderr\": 0.026261677607806636\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.29720670391061454,\n \"acc_stderr\": 0.015285313353641602,\n \"acc_norm\": 0.29720670391061454,\n \"acc_norm_stderr\": 0.015285313353641602\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.027826109307283693,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.027826109307283693\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6463022508038585,\n \"acc_stderr\": 0.027155208103200868,\n \"acc_norm\": 0.6463022508038585,\n \"acc_norm_stderr\": 0.027155208103200868\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6512345679012346,\n \"acc_stderr\": 0.026517597724465013,\n \"acc_norm\": 0.6512345679012346,\n \"acc_norm_stderr\": 0.026517597724465013\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3748370273794003,\n \"acc_stderr\": 0.01236365246755193,\n \"acc_norm\": 0.3748370273794003,\n \"acc_norm_stderr\": 0.01236365246755193\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5772058823529411,\n \"acc_stderr\": 0.030008562845003472,\n \"acc_norm\": 0.5772058823529411,\n \"acc_norm_stderr\": 0.030008562845003472\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5522875816993464,\n \"acc_stderr\": 0.02011692534742242,\n \"acc_norm\": 0.5522875816993464,\n \"acc_norm_stderr\": 0.02011692534742242\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5636363636363636,\n \"acc_stderr\": 0.04750185058907297,\n \"acc_norm\": 0.5636363636363636,\n \"acc_norm_stderr\": 0.04750185058907297\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6285714285714286,\n \"acc_stderr\": 0.03093285879278986,\n \"acc_norm\": 0.6285714285714286,\n \"acc_norm_stderr\": 0.03093285879278986\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7512437810945274,\n \"acc_stderr\": 0.030567675938916714,\n \"acc_norm\": 0.7512437810945274,\n \"acc_norm_stderr\": 0.030567675938916714\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7426900584795322,\n \"acc_stderr\": 0.03352799844161865,\n \"acc_norm\": 0.7426900584795322,\n \"acc_norm_stderr\": 0.03352799844161865\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2741738066095471,\n \"mc1_stderr\": 0.015616518497219371,\n \"mc2\": 0.4119583674563438,\n \"mc2_stderr\": 0.014060895458246502\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.739542225730071,\n \"acc_stderr\": 0.01233483367199829\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17816527672479152,\n \"acc_stderr\": 0.01054013252754947\n }\n}\n```", "repo_url": 
"https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-27-25.240845.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-27-25.240845.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-27-25.240845.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-27-25.240845.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-27-25.240845.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T06_27_25.240845", "path": ["**/details_harness|winogrande|5_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T06-27-25.240845.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_11T06_27_25.240845", "path": ["results_2023-12-11T06-27-25.240845.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T06-27-25.240845.parquet"]}]}]} | 2023-12-11T06:30:59+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-base-gamma-7b on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
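A minimal loading sketch (mirroring the standard snippet these leaderboard detail datasets ship with; `harness_winogrande_5` is one of this dataset's configurations per the metadata above, and the repo id is inferred from the leaderboard's `details_<org>__<model>` naming convention):

```python
from datasets import load_dataset

# Load one evaluation configuration of this details dataset.
data = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-base-gamma-7b",
    "harness_winogrande_5",
    split="train",
)
```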
## Latest results
These are the latest results from run 2023-12-11T06:27:25.240845 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-base-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:27:25.240845(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-base-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:27:25.240845(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-base-gamma-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-base-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T06:27:25.240845(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
59d718eea7861141e4584045c9891900fa2cf411 |
The English dataset summarize_from_feedback was translated using Gugugo-koen, a Korean-English translation model.
The original spans batch3.json through batch22.json, but due to time constraints the work was stopped after batch9.json. Anyone who needs the complete data should refer to the original and carry out the remaining work.
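For reference, a minimal loading sketch (the batch*.json file layout is an assumption inferred from the description above, not a confirmed file listing):

```python
from datasets import load_dataset

# Hypothetical layout: translated batches stored as batch3.json .. batch9.json
# in the repository root; the glob pattern picks up whichever files exist.
data = load_dataset(
    "hankang2023/summarize_from_feedback.kr",
    data_files={"train": "batch*.json"},
    split="train",
)
```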
summarize_from_feedback : https://huggingface.co/datasets/openai/summarize_from_feedback
Gugugo-koen : https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ | hankang2023/summarize_from_feedback.kr | [
"region:us"
] | 2023-12-11T06:31:05+00:00 | {} | 2023-12-11T06:43:33+00:00 | [] | [] | TAGS
#region-us
|
The English dataset summarize_from_feedback was translated using Gugugo-koen, a Korean-English translation model.
The original spans batch3.json through batch22.json, but due to time constraints the work was stopped after batch9.json. Anyone who needs the complete data should refer to the original and carry out the remaining work.
summarize_from_feedback : URL
Gugugo-koen : URL | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
5be252c7043430b09ad52f4444b253b2b896d815 |
# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [stabilityai/japanese-stablelm-instruct-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b",
"harness_winogrande_5",
split="train")
```
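A run-specific split can also be selected by its timestamp (a sketch; the split name below is taken from this dataset's configuration metadata, and the "latest" split always mirrors the newest run):

```python
from datasets import load_dataset

# Select one specific run via its timestamped split name instead of "train".
data = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b",
    "harness_winogrande_5",
    split="2023_12_11T06_30_10.836687",
)
```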
## Latest results
These are the [latest results from run 2023-12-11T06:30:10.836687](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b/blob/main/results_2023-12-11T06-30-10.836687.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5449847674493294,
"acc_stderr": 0.03393393668520809,
"acc_norm": 0.5515719011338248,
"acc_norm_stderr": 0.03467665306933469,
"mc1": 0.2631578947368421,
"mc1_stderr": 0.015415241740237015,
"mc2": 0.3977280700455547,
"mc2_stderr": 0.013876577048055314
},
"harness|arc:challenge|25": {
"acc": 0.4786689419795222,
"acc_stderr": 0.014598087973127108,
"acc_norm": 0.5068259385665529,
"acc_norm_stderr": 0.014610029151379813
},
"harness|hellaswag|10": {
"acc": 0.5865365465046803,
"acc_stderr": 0.0049144805345337095,
"acc_norm": 0.7867954590718981,
"acc_norm_stderr": 0.00408733904510631
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.04316378599511324,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.04316378599511324
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5328947368421053,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.5328947368421053,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.569811320754717,
"acc_stderr": 0.030471445867183235,
"acc_norm": 0.569811320754717,
"acc_norm_stderr": 0.030471445867183235
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04122728707651282,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04122728707651282
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5144508670520231,
"acc_stderr": 0.03810871630454764,
"acc_norm": 0.5144508670520231,
"acc_norm_stderr": 0.03810871630454764
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808778,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808778
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.024870815251057082,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.024870815251057082
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.043435254289490965,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.043435254289490965
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6193548387096774,
"acc_stderr": 0.027621717832907032,
"acc_norm": 0.6193548387096774,
"acc_norm_stderr": 0.027621717832907032
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.43349753694581283,
"acc_stderr": 0.034867317274198714,
"acc_norm": 0.43349753694581283,
"acc_norm_stderr": 0.034867317274198714
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6848484848484848,
"acc_stderr": 0.0362773057502241,
"acc_norm": 0.6848484848484848,
"acc_norm_stderr": 0.0362773057502241
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6868686868686869,
"acc_stderr": 0.033042050878136525,
"acc_norm": 0.6868686868686869,
"acc_norm_stderr": 0.033042050878136525
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7979274611398963,
"acc_stderr": 0.028979089794296732,
"acc_norm": 0.7979274611398963,
"acc_norm_stderr": 0.028979089794296732
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5076923076923077,
"acc_stderr": 0.02534800603153476,
"acc_norm": 0.5076923076923077,
"acc_norm_stderr": 0.02534800603153476
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.28888888888888886,
"acc_stderr": 0.027634907264178544,
"acc_norm": 0.28888888888888886,
"acc_norm_stderr": 0.027634907264178544
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.542016806722689,
"acc_stderr": 0.03236361111951941,
"acc_norm": 0.542016806722689,
"acc_norm_stderr": 0.03236361111951941
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7247706422018348,
"acc_stderr": 0.019149093743155203,
"acc_norm": 0.7247706422018348,
"acc_norm_stderr": 0.019149093743155203
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.03350991604696043,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.03350991604696043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7205882352941176,
"acc_stderr": 0.031493281045079556,
"acc_norm": 0.7205882352941176,
"acc_norm_stderr": 0.031493281045079556
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.029312814153955917,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.029312814153955917
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.03227790442850499,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.03227790442850499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6030534351145038,
"acc_stderr": 0.04291135671009225,
"acc_norm": 0.6030534351145038,
"acc_norm_stderr": 0.04291135671009225
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514512,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514512
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04766075165356462,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04766075165356462
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.656441717791411,
"acc_stderr": 0.037311335196738925,
"acc_norm": 0.656441717791411,
"acc_norm_stderr": 0.037311335196738925
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.045218299028335865,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.045218299028335865
},
"harness|hendrycksTest-management|5": {
"acc": 0.6601941747572816,
"acc_stderr": 0.046897659372781335,
"acc_norm": 0.6601941747572816,
"acc_norm_stderr": 0.046897659372781335
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8162393162393162,
"acc_stderr": 0.025372139671722933,
"acc_norm": 0.8162393162393162,
"acc_norm_stderr": 0.025372139671722933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.015671006009339582,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.015671006009339582
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.026152198619726796,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.026152198619726796
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31843575418994413,
"acc_stderr": 0.015581008080360276,
"acc_norm": 0.31843575418994413,
"acc_norm_stderr": 0.015581008080360276
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5915032679738562,
"acc_stderr": 0.028146405993096358,
"acc_norm": 0.5915032679738562,
"acc_norm_stderr": 0.028146405993096358
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6495176848874598,
"acc_stderr": 0.02709865262130175,
"acc_norm": 0.6495176848874598,
"acc_norm_stderr": 0.02709865262130175
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6265432098765432,
"acc_stderr": 0.02691500301138016,
"acc_norm": 0.6265432098765432,
"acc_norm_stderr": 0.02691500301138016
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.029555454236778852,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.029555454236778852
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3852672750977836,
"acc_stderr": 0.012429485434955197,
"acc_norm": 0.3852672750977836,
"acc_norm_stderr": 0.012429485434955197
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5661764705882353,
"acc_stderr": 0.03010563657001664,
"acc_norm": 0.5661764705882353,
"acc_norm_stderr": 0.03010563657001664
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.545751633986928,
"acc_stderr": 0.0201429745537952,
"acc_norm": 0.545751633986928,
"acc_norm_stderr": 0.0201429745537952
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5636363636363636,
"acc_stderr": 0.04750185058907297,
"acc_norm": 0.5636363636363636,
"acc_norm_stderr": 0.04750185058907297
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6204081632653061,
"acc_stderr": 0.031067211262872478,
"acc_norm": 0.6204081632653061,
"acc_norm_stderr": 0.031067211262872478
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7562189054726368,
"acc_stderr": 0.030360490154014638,
"acc_norm": 0.7562189054726368,
"acc_norm_stderr": 0.030360490154014638
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7543859649122807,
"acc_stderr": 0.0330140594698725,
"acc_norm": 0.7543859649122807,
"acc_norm_stderr": 0.0330140594698725
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2631578947368421,
"mc1_stderr": 0.015415241740237015,
"mc2": 0.3977280700455547,
"mc2_stderr": 0.013876577048055314
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.012370922527262006
},
"harness|gsm8k|5": {
"acc": 0.19257012888551933,
"acc_stderr": 0.010861483868509948
}
}
```
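To work with these aggregated metrics programmatically, the "results" configuration mentioned above can be loaded directly (a minimal sketch, assuming the "results" config and "latest" split naming used by these leaderboard detail datasets):

```python
from datasets import load_dataset

# Load the aggregated results of the run; "latest" points to the newest run.
aggregated = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b",
    "results",
    split="latest",
)
```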
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b | [
"region:us"
] | 2023-12-11T06:33:01+00:00 | {"pretty_name": "Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [stabilityai/japanese-stablelm-instruct-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T06:30:10.836687](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b/blob/main/results_2023-12-11T06-30-10.836687.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5449847674493294,\n \"acc_stderr\": 0.03393393668520809,\n \"acc_norm\": 0.5515719011338248,\n \"acc_norm_stderr\": 0.03467665306933469,\n \"mc1\": 0.2631578947368421,\n \"mc1_stderr\": 0.015415241740237015,\n \"mc2\": 0.3977280700455547,\n \"mc2_stderr\": 0.013876577048055314\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4786689419795222,\n \"acc_stderr\": 0.014598087973127108,\n \"acc_norm\": 0.5068259385665529,\n \"acc_norm_stderr\": 0.014610029151379813\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5865365465046803,\n \"acc_stderr\": 0.0049144805345337095,\n \"acc_norm\": 0.7867954590718981,\n \"acc_norm_stderr\": 0.00408733904510631\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.04316378599511324,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.04316378599511324\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5328947368421053,\n \"acc_stderr\": 0.040601270352363966,\n \"acc_norm\": 0.5328947368421053,\n \"acc_norm_stderr\": 0.040601270352363966\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.569811320754717,\n \"acc_stderr\": 0.030471445867183235,\n \"acc_norm\": 0.569811320754717,\n \"acc_norm_stderr\": 0.030471445867183235\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04122728707651282,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 
0.04122728707651282\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5144508670520231,\n \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.5144508670520231,\n \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.024870815251057082,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.024870815251057082\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.043435254289490965,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.043435254289490965\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6193548387096774,\n \"acc_stderr\": 0.027621717832907032,\n \"acc_norm\": 0.6193548387096774,\n \"acc_norm_stderr\": 0.027621717832907032\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.43349753694581283,\n \"acc_stderr\": 0.034867317274198714,\n \"acc_norm\": 0.43349753694581283,\n \"acc_norm_stderr\": 0.034867317274198714\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6848484848484848,\n \"acc_stderr\": 0.0362773057502241,\n \"acc_norm\": 0.6848484848484848,\n \"acc_norm_stderr\": 0.0362773057502241\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6868686868686869,\n \"acc_stderr\": 0.033042050878136525,\n \"acc_norm\": 0.6868686868686869,\n \"acc_norm_stderr\": 0.033042050878136525\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.028979089794296732,\n \"acc_norm\": 
0.7979274611398963,\n \"acc_norm_stderr\": 0.028979089794296732\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5076923076923077,\n \"acc_stderr\": 0.02534800603153476,\n \"acc_norm\": 0.5076923076923077,\n \"acc_norm_stderr\": 0.02534800603153476\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.542016806722689,\n \"acc_stderr\": 0.03236361111951941,\n \"acc_norm\": 0.542016806722689,\n \"acc_norm_stderr\": 0.03236361111951941\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7247706422018348,\n \"acc_stderr\": 0.019149093743155203,\n \"acc_norm\": 0.7247706422018348,\n \"acc_norm_stderr\": 0.019149093743155203\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.03350991604696043,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.03350991604696043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7205882352941176,\n \"acc_stderr\": 0.031493281045079556,\n \"acc_norm\": 0.7205882352941176,\n \"acc_norm_stderr\": 0.031493281045079556\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7172995780590717,\n \"acc_stderr\": 0.029312814153955917,\n \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.029312814153955917\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n \"acc_stderr\": 0.03227790442850499,\n \"acc_norm\": 0.6367713004484304,\n \"acc_norm_stderr\": 0.03227790442850499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6030534351145038,\n \"acc_stderr\": 0.04291135671009225,\n \"acc_norm\": 0.6030534351145038,\n \"acc_norm_stderr\": 0.04291135671009225\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.71900826446281,\n \"acc_stderr\": 0.04103203830514512,\n \"acc_norm\": 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514512\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04766075165356462,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.04766075165356462\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.656441717791411,\n \"acc_stderr\": 0.037311335196738925,\n \"acc_norm\": 0.656441717791411,\n \"acc_norm_stderr\": 0.037311335196738925\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n \"acc_stderr\": 0.045218299028335865,\n \"acc_norm\": 0.3482142857142857,\n \"acc_norm_stderr\": 0.045218299028335865\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6601941747572816,\n \"acc_stderr\": 0.046897659372781335,\n \"acc_norm\": 0.6601941747572816,\n \"acc_norm_stderr\": 0.046897659372781335\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8162393162393162,\n \"acc_stderr\": 0.025372139671722933,\n \"acc_norm\": 0.8162393162393162,\n \"acc_norm_stderr\": 0.025372139671722933\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n 
},\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.015671006009339582,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.015671006009339582\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.026152198619726796,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.026152198619726796\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31843575418994413,\n \"acc_stderr\": 0.015581008080360276,\n \"acc_norm\": 0.31843575418994413,\n \"acc_norm_stderr\": 0.015581008080360276\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5915032679738562,\n \"acc_stderr\": 0.028146405993096358,\n \"acc_norm\": 0.5915032679738562,\n \"acc_norm_stderr\": 0.028146405993096358\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6495176848874598,\n \"acc_stderr\": 0.02709865262130175,\n \"acc_norm\": 0.6495176848874598,\n \"acc_norm_stderr\": 0.02709865262130175\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6265432098765432,\n \"acc_stderr\": 0.02691500301138016,\n \"acc_norm\": 0.6265432098765432,\n \"acc_norm_stderr\": 0.02691500301138016\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778852,\n \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778852\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3852672750977836,\n \"acc_stderr\": 0.012429485434955197,\n \"acc_norm\": 0.3852672750977836,\n \"acc_norm_stderr\": 0.012429485434955197\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5661764705882353,\n \"acc_stderr\": 0.03010563657001664,\n \"acc_norm\": 0.5661764705882353,\n \"acc_norm_stderr\": 0.03010563657001664\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.545751633986928,\n \"acc_stderr\": 0.0201429745537952,\n \"acc_norm\": 0.545751633986928,\n \"acc_norm_stderr\": 0.0201429745537952\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5636363636363636,\n \"acc_stderr\": 0.04750185058907297,\n \"acc_norm\": 0.5636363636363636,\n \"acc_norm_stderr\": 0.04750185058907297\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6204081632653061,\n \"acc_stderr\": 0.031067211262872478,\n \"acc_norm\": 0.6204081632653061,\n \"acc_norm_stderr\": 0.031067211262872478\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7562189054726368,\n \"acc_stderr\": 0.030360490154014638,\n \"acc_norm\": 0.7562189054726368,\n \"acc_norm_stderr\": 0.030360490154014638\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7543859649122807,\n \"acc_stderr\": 0.0330140594698725,\n \"acc_norm\": 0.7543859649122807,\n \"acc_norm_stderr\": 0.0330140594698725\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2631578947368421,\n \"mc1_stderr\": 0.015415241740237015,\n \"mc2\": 0.3977280700455547,\n \"mc2_stderr\": 0.013876577048055314\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.012370922527262006\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19257012888551933,\n 
\"acc_stderr\": 0.010861483868509948\n }\n}\n```", "repo_url": "https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-30-10.836687.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-30-10.836687.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-30-10.836687.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-30-10.836687.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-30-10.836687.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T06_30_10.836687", "path": ["**/details_harness|winogrande|5_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T06-30-10.836687.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_11T06_30_10.836687", "path": ["results_2023-12-11T06-30-10.836687.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T06-30-10.836687.parquet"]}]}]} | 2023-12-11T06:33:45+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-instruct-gamma-7b on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
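The original snippet is not reproduced in this dump, so here is a minimal sketch of such a load (the repository id below is inferred from the model name and the leaderboard's `details_<org>__<model>` naming pattern, so treat it as an assumption):

```python
from datasets import load_dataset

# Load the per-sample details of one evaluation task; config names follow
# the "harness_<task>_<n_shot>" pattern listed in this card's configs.
data = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b",
    "harness_winogrande_5",
    split="train",
)
```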
## Latest results
These are the latest results from run 2023-12-11T06:30:10.836687 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
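The results JSON itself is not reproduced here; below is a sketch of how one might read the aggregated numbers instead (the "results" config and its "latest" split appear in this card's metadata, while the repository id is again an inferred assumption):

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics of the run; the
# "latest" split always points at the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_stabilityai__japanese-stablelm-instruct-gamma-7b",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics
```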
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-instruct-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:30:10.836687(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-instruct-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:30:10.836687(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of stabilityai/japanese-stablelm-instruct-gamma-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model stabilityai/japanese-stablelm-instruct-gamma-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T06:30:10.836687(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
33d4f9439aae1d559b2fde198e0e702c2bfd68e9 |
# Dataset of Takahata Nodoka
This is the dataset of Takahata Nodoka, containing 421 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 421 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 967 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1081 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 421 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 421 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 421 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 967 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 967 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 835 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1081 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1081 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
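A minimal sketch of fetching one of the archives above with `huggingface_hub` (the repository id is taken from this dataset's id and the filename from the table; verify both before relying on them):

```python
from huggingface_hub import hf_hub_download

# Download the 384x512 aligned archive from the dataset repository.
path = hf_hub_download(
    repo_id="CyberHarem/takahata_nodoka_alicegearaegisexpansion",
    filename="dataset-384x512.zip",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded zip
```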
| CyberHarem/takahata_nodoka_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T06:41:01+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T06:58:40+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Takahata Nodoka
==========================
This is the dataset of Takahata Nodoka, containing 421 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
cff405c52ad00791091aa82564e63242c9321086 |
# Dataset Card for Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/wang7776/Llama-2-7b-chat-hf-10-sparsity
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [wang7776/Llama-2-7b-chat-hf-10-sparsity](https://huggingface.co/wang7776/Llama-2-7b-chat-hf-10-sparsity) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-11T06:47:13.495112](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity/blob/main/results_2023-12-11T06-47-13.495112.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4828523298719361,
"acc_stderr": 0.034344882175746286,
"acc_norm": 0.4877369282737841,
"acc_norm_stderr": 0.03509947561893186,
"mc1": 0.30599755201958384,
"mc1_stderr": 0.016132229728155045,
"mc2": 0.4528805407291649,
"mc2_stderr": 0.015739032860856518
},
"harness|arc:challenge|25": {
"acc": 0.49829351535836175,
"acc_stderr": 0.01461130570505699,
"acc_norm": 0.5315699658703071,
"acc_norm_stderr": 0.014582236460866977
},
"harness|hellaswag|10": {
"acc": 0.5933081059549891,
"acc_stderr": 0.0049021253880022,
"acc_norm": 0.7826130252937662,
"acc_norm_stderr": 0.004116250643976751
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4222222222222222,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.4222222222222222,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.48026315789473684,
"acc_stderr": 0.040657710025626036,
"acc_norm": 0.48026315789473684,
"acc_norm_stderr": 0.040657710025626036
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5283018867924528,
"acc_stderr": 0.030723535249006107,
"acc_norm": 0.5283018867924528,
"acc_norm_stderr": 0.030723535249006107
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.04179596617581,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.04179596617581
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3930635838150289,
"acc_stderr": 0.03724249595817731,
"acc_norm": 0.3930635838150289,
"acc_norm_stderr": 0.03724249595817731
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.04220773659171453,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.04220773659171453
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.41702127659574467,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.41702127659574467,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3684210526315789,
"acc_stderr": 0.04537815354939392,
"acc_norm": 0.3684210526315789,
"acc_norm_stderr": 0.04537815354939392
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.30423280423280424,
"acc_stderr": 0.023695415009463087,
"acc_norm": 0.30423280423280424,
"acc_norm_stderr": 0.023695415009463087
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.24603174603174602,
"acc_stderr": 0.03852273364924315,
"acc_norm": 0.24603174603174602,
"acc_norm_stderr": 0.03852273364924315
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5258064516129032,
"acc_stderr": 0.02840609505765332,
"acc_norm": 0.5258064516129032,
"acc_norm_stderr": 0.02840609505765332
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.35960591133004927,
"acc_stderr": 0.03376458246509566,
"acc_norm": 0.35960591133004927,
"acc_norm_stderr": 0.03376458246509566
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6121212121212121,
"acc_stderr": 0.038049136539710114,
"acc_norm": 0.6121212121212121,
"acc_norm_stderr": 0.038049136539710114
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.601010101010101,
"acc_stderr": 0.03488901616852732,
"acc_norm": 0.601010101010101,
"acc_norm_stderr": 0.03488901616852732
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.689119170984456,
"acc_stderr": 0.03340361906276586,
"acc_norm": 0.689119170984456,
"acc_norm_stderr": 0.03340361906276586
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4256410256410256,
"acc_stderr": 0.025069094387296546,
"acc_norm": 0.4256410256410256,
"acc_norm_stderr": 0.025069094387296546
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.02684205787383371,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.02684205787383371
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.42436974789915966,
"acc_stderr": 0.03210479051015776,
"acc_norm": 0.42436974789915966,
"acc_norm_stderr": 0.03210479051015776
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6587155963302752,
"acc_stderr": 0.020328612816592446,
"acc_norm": 0.6587155963302752,
"acc_norm_stderr": 0.020328612816592446
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.03167468706828977,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.03167468706828977
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.033540924375915195,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.033540924375915195
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6540084388185654,
"acc_stderr": 0.03096481058878671,
"acc_norm": 0.6540084388185654,
"acc_norm_stderr": 0.03096481058878671
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5874439461883408,
"acc_stderr": 0.03304062175449297,
"acc_norm": 0.5874439461883408,
"acc_norm_stderr": 0.03304062175449297
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5648854961832062,
"acc_stderr": 0.04348208051644858,
"acc_norm": 0.5648854961832062,
"acc_norm_stderr": 0.04348208051644858
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.043913262867240704,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.043913262867240704
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04766075165356461,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04766075165356461
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5644171779141104,
"acc_stderr": 0.03895632464138937,
"acc_norm": 0.5644171779141104,
"acc_norm_stderr": 0.03895632464138937
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.32142857142857145,
"acc_stderr": 0.04432804055291519,
"acc_norm": 0.32142857142857145,
"acc_norm_stderr": 0.04432804055291519
},
"harness|hendrycksTest-management|5": {
"acc": 0.6990291262135923,
"acc_stderr": 0.045416094465039476,
"acc_norm": 0.6990291262135923,
"acc_norm_stderr": 0.045416094465039476
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7136752136752137,
"acc_stderr": 0.02961432369045665,
"acc_norm": 0.7136752136752137,
"acc_norm_stderr": 0.02961432369045665
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6807151979565773,
"acc_stderr": 0.016671261749538726,
"acc_norm": 0.6807151979565773,
"acc_norm_stderr": 0.016671261749538726
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5144508670520231,
"acc_stderr": 0.026907849856282542,
"acc_norm": 0.5144508670520231,
"acc_norm_stderr": 0.026907849856282542
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.014219570788103986,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.014219570788103986
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5163398692810458,
"acc_stderr": 0.028614624752805434,
"acc_norm": 0.5163398692810458,
"acc_norm_stderr": 0.028614624752805434
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5659163987138264,
"acc_stderr": 0.028150232244535594,
"acc_norm": 0.5659163987138264,
"acc_norm_stderr": 0.028150232244535594
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5709876543209876,
"acc_stderr": 0.027538925613470863,
"acc_norm": 0.5709876543209876,
"acc_norm_stderr": 0.027538925613470863
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.38652482269503546,
"acc_stderr": 0.029049190342543454,
"acc_norm": 0.38652482269503546,
"acc_norm_stderr": 0.029049190342543454
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.34810951760104303,
"acc_stderr": 0.012166738993698198,
"acc_norm": 0.34810951760104303,
"acc_norm_stderr": 0.012166738993698198
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.030161911930767105,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.030161911930767105
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.46405228758169936,
"acc_stderr": 0.020175488765484036,
"acc_norm": 0.46405228758169936,
"acc_norm_stderr": 0.020175488765484036
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5454545454545454,
"acc_stderr": 0.04769300568972743,
"acc_norm": 0.5454545454545454,
"acc_norm_stderr": 0.04769300568972743
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5183673469387755,
"acc_stderr": 0.03198761546763127,
"acc_norm": 0.5183673469387755,
"acc_norm_stderr": 0.03198761546763127
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6417910447761194,
"acc_stderr": 0.03390393042268815,
"acc_norm": 0.6417910447761194,
"acc_norm_stderr": 0.03390393042268815
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7076023391812866,
"acc_stderr": 0.03488647713457922,
"acc_norm": 0.7076023391812866,
"acc_norm_stderr": 0.03488647713457922
},
"harness|truthfulqa:mc|0": {
"mc1": 0.30599755201958384,
"mc1_stderr": 0.016132229728155045,
"mc2": 0.4528805407291649,
"mc2_stderr": 0.015739032860856518
},
"harness|winogrande|5": {
"acc": 0.7158642462509865,
"acc_stderr": 0.012675392786772722
},
"harness|gsm8k|5": {
"acc": 0.18423047763457165,
"acc_stderr": 0.010678414428555006
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity | [
"region:us"
] | 2023-12-11T06:50:09+00:00 | {"pretty_name": "Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity", "dataset_summary": "Dataset automatically created during the evaluation run of model [wang7776/Llama-2-7b-chat-hf-10-sparsity](https://huggingface.co/wang7776/Llama-2-7b-chat-hf-10-sparsity) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T06:47:13.495112](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity/blob/main/results_2023-12-11T06-47-13.495112.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4828523298719361,\n \"acc_stderr\": 0.034344882175746286,\n \"acc_norm\": 0.4877369282737841,\n \"acc_norm_stderr\": 0.03509947561893186,\n \"mc1\": 0.30599755201958384,\n \"mc1_stderr\": 0.016132229728155045,\n \"mc2\": 0.4528805407291649,\n \"mc2_stderr\": 0.015739032860856518\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.49829351535836175,\n \"acc_stderr\": 0.01461130570505699,\n \"acc_norm\": 0.5315699658703071,\n \"acc_norm_stderr\": 0.014582236460866977\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5933081059549891,\n \"acc_stderr\": 0.0049021253880022,\n \"acc_norm\": 0.7826130252937662,\n \"acc_norm_stderr\": 0.004116250643976751\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4222222222222222,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.4222222222222222,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.48026315789473684,\n \"acc_stderr\": 0.040657710025626036,\n \"acc_norm\": 0.48026315789473684,\n \"acc_norm_stderr\": 0.040657710025626036\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5283018867924528,\n \"acc_stderr\": 0.030723535249006107,\n \"acc_norm\": 0.5283018867924528,\n \"acc_norm_stderr\": 0.030723535249006107\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.04179596617581,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.04179596617581\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3930635838150289,\n \"acc_stderr\": 0.03724249595817731,\n \"acc_norm\": 0.3930635838150289,\n \"acc_norm_stderr\": 0.03724249595817731\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171453,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171453\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.41702127659574467,\n \"acc_stderr\": 0.03223276266711712,\n \"acc_norm\": 0.41702127659574467,\n \"acc_norm_stderr\": 0.03223276266711712\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n \"acc_stderr\": 0.04537815354939392,\n \"acc_norm\": 0.3684210526315789,\n \"acc_norm_stderr\": 0.04537815354939392\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30423280423280424,\n \"acc_stderr\": 0.023695415009463087,\n \"acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.023695415009463087\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.03852273364924315,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.03852273364924315\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5258064516129032,\n \"acc_stderr\": 0.02840609505765332,\n \"acc_norm\": 0.5258064516129032,\n \"acc_norm_stderr\": 0.02840609505765332\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.03376458246509566,\n \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.03376458246509566\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6121212121212121,\n \"acc_stderr\": 0.038049136539710114,\n \"acc_norm\": 0.6121212121212121,\n \"acc_norm_stderr\": 0.038049136539710114\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.601010101010101,\n \"acc_stderr\": 0.03488901616852732,\n \"acc_norm\": 0.601010101010101,\n \"acc_norm_stderr\": 0.03488901616852732\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.689119170984456,\n \"acc_stderr\": 0.03340361906276586,\n \"acc_norm\": 0.689119170984456,\n \"acc_norm_stderr\": 0.03340361906276586\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4256410256410256,\n \"acc_stderr\": 0.025069094387296546,\n \"acc_norm\": 0.4256410256410256,\n \"acc_norm_stderr\": 0.025069094387296546\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.42436974789915966,\n \"acc_stderr\": 0.03210479051015776,\n \"acc_norm\": 0.42436974789915966,\n \"acc_norm_stderr\": 0.03210479051015776\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\": 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6587155963302752,\n \"acc_stderr\": 0.020328612816592446,\n \"acc_norm\": 0.6587155963302752,\n \"acc_norm_stderr\": 0.020328612816592446\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.03167468706828977,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.03167468706828977\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.033540924375915195,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.033540924375915195\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6540084388185654,\n \"acc_stderr\": 0.03096481058878671,\n \"acc_norm\": 0.6540084388185654,\n \"acc_norm_stderr\": 0.03096481058878671\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5874439461883408,\n \"acc_stderr\": 0.03304062175449297,\n \"acc_norm\": 0.5874439461883408,\n \"acc_norm_stderr\": 0.03304062175449297\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.043913262867240704,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.043913262867240704\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04766075165356461,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.04766075165356461\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5644171779141104,\n \"acc_stderr\": 0.03895632464138937,\n \"acc_norm\": 0.5644171779141104,\n \"acc_norm_stderr\": 0.03895632464138937\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.045416094465039476,\n \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.045416094465039476\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7136752136752137,\n \"acc_stderr\": 0.02961432369045665,\n \"acc_norm\": 0.7136752136752137,\n \"acc_norm_stderr\": 0.02961432369045665\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.6807151979565773,\n \"acc_stderr\": 0.016671261749538726,\n \"acc_norm\": 0.6807151979565773,\n \"acc_norm_stderr\": 0.016671261749538726\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5144508670520231,\n \"acc_stderr\": 0.026907849856282542,\n \"acc_norm\": 0.5144508670520231,\n \"acc_norm_stderr\": 0.026907849856282542\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n \"acc_stderr\": 0.014219570788103986,\n \"acc_norm\": 0.23687150837988827,\n \"acc_norm_stderr\": 0.014219570788103986\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5163398692810458,\n \"acc_stderr\": 0.028614624752805434,\n \"acc_norm\": 0.5163398692810458,\n \"acc_norm_stderr\": 0.028614624752805434\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5659163987138264,\n \"acc_stderr\": 0.028150232244535594,\n \"acc_norm\": 0.5659163987138264,\n \"acc_norm_stderr\": 0.028150232244535594\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5709876543209876,\n \"acc_stderr\": 0.027538925613470863,\n \"acc_norm\": 0.5709876543209876,\n \"acc_norm_stderr\": 0.027538925613470863\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.38652482269503546,\n \"acc_stderr\": 0.029049190342543454,\n \"acc_norm\": 0.38652482269503546,\n \"acc_norm_stderr\": 0.029049190342543454\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.34810951760104303,\n \"acc_stderr\": 0.012166738993698198,\n \"acc_norm\": 0.34810951760104303,\n \"acc_norm_stderr\": 0.012166738993698198\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.030161911930767105,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.030161911930767105\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.46405228758169936,\n \"acc_stderr\": 0.020175488765484036,\n \"acc_norm\": 0.46405228758169936,\n \"acc_norm_stderr\": 0.020175488765484036\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5454545454545454,\n \"acc_stderr\": 0.04769300568972743,\n \"acc_norm\": 0.5454545454545454,\n \"acc_norm_stderr\": 0.04769300568972743\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5183673469387755,\n \"acc_stderr\": 0.03198761546763127,\n \"acc_norm\": 0.5183673469387755,\n \"acc_norm_stderr\": 0.03198761546763127\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6417910447761194,\n \"acc_stderr\": 0.03390393042268815,\n \"acc_norm\": 0.6417910447761194,\n \"acc_norm_stderr\": 0.03390393042268815\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7076023391812866,\n \"acc_stderr\": 0.03488647713457922,\n \"acc_norm\": 0.7076023391812866,\n \"acc_norm_stderr\": 0.03488647713457922\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.30599755201958384,\n \"mc1_stderr\": 0.016132229728155045,\n \"mc2\": 0.4528805407291649,\n \"mc2_stderr\": 0.015739032860856518\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7158642462509865,\n \"acc_stderr\": 0.012675392786772722\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.18423047763457165,\n \"acc_stderr\": 
0.010678414428555006\n }\n}\n```", "repo_url": "https://huggingface.co/wang7776/Llama-2-7b-chat-hf-10-sparsity", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-47-13.495112.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-47-13.495112.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-47-13.495112.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T06-47-13.495112.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-47-13.495112.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T06_47_13.495112", "path": ["**/details_harness|winogrande|5_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T06-47-13.495112.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_11T06_47_13.495112", "path": ["results_2023-12-11T06-47-13.495112.parquet"]}, {"split": "latest", "path": ["results_2023-12-11T06-47-13.495112.parquet"]}]}]} | 2023-12-11T06:50:53+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model wang7776/Llama-2-7b-chat-hf-10-sparsity on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
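The loading snippet from the original card is not included in this dump, so below is a minimal sketch using the `datasets` library. The repo id is an assumption based on the leaderboard's usual `details_<org>__<model>` naming and should be replaced with this dataset's actual id:

```python
from datasets import load_dataset

# Assumed repo id (leaderboard "details" naming convention); replace if needed.
data = load_dataset(
    "open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity",
    "harness_winogrande_5",  # any of the 63 config names listed in this card's metadata
    split="latest",          # or the timestamped split "2023_12_11T06_47_13.495112"
)
```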
## Latest results
These are the latest results from run 2023-12-11T06:47:13.495112 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval). The full per-task numbers for this run appear in the raw JSON dump above.
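The aggregated numbers are also exposed as their own configuration; a sketch, under the same repo-id assumption as in the snippet above:

```python
from datasets import load_dataset

# The "results" config (see the configs list above) holds the aggregated metrics.
results = load_dataset(
    "open-llm-leaderboard/details_wang7776__Llama-2-7b-chat-hf-10-sparsity",
    "results",
    split="latest",
)
```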
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model wang7776/Llama-2-7b-chat-hf-10-sparsity on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:47:13.495112(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model wang7776/Llama-2-7b-chat-hf-10-sparsity on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T06:47:13.495112(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
27,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of wang7776/Llama-2-7b-chat-hf-10-sparsity## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model wang7776/Llama-2-7b-chat-hf-10-sparsity on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T06:47:13.495112(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
03842d62d862cfbc00e8bd036e2d11bc8f0f0150 | dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 23410479
num_examples: 3000
download_size: 6521212
dataset_size: 23410479
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | sukrit-arora/github-issues | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:document-retrieval",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2023-12-11T07:11:38+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-retrieval"], "task_ids": ["multi-class-classification", "multi-label-classification", "document-retrieval"], "pretty_name": "HuggingFace Github Issues", "tags": []} | 2023-12-11T07:25:11+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-text-retrieval #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-document-retrieval #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us
| dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 23410479
num_examples: 3000
download_size: 6521212
dataset_size: 23410479
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
--- | [] | [
"TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-document-retrieval #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us \n"
] | [
104
] | [
"passage: TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-document-retrieval #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us \n"
] |
a0ce7203dd35509361212367ae542ab3ff9929fc |
# Dataset of Momoshina Fumika
This is the dataset of Momoshina Fumika, containing 231 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). A download sketch for the packaged variants follows the table below.
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 231 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 541 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 604 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 231 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 231 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 231 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 541 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 541 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 425 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 604 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 604 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
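Since the packages in the table are plain zip archives in this dataset repository (the relative links suggest they sit at the repo root), they can be fetched with `huggingface_hub`; a sketch, with the filename being one of the variants listed above:

```python
from huggingface_hub import hf_hub_download

# Fetch one packaged variant from the table above; assumes the zips are stored
# at the root of this dataset repository.
path = hf_hub_download(
    repo_id="CyberHarem/momoshina_fumika_alicegearaegisexpansion",
    filename="dataset-raw.zip",
    repo_type="dataset",
)
print(path)  # local path to the downloaded archive
```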
| CyberHarem/momoshina_fumika_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T07:16:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T07:21:48+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Momoshina Fumika
===========================
This is the dataset of Momoshina Fumika, containing 231 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
c4328edb1a49b6e8e83d23cf394ef0a0da2e9eb8 |
# Dataset Card for Orca DPO Pairs Dutch
## Dataset Description
This is a Dutch recreation of [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). It contains single-turn responses to a prompt (and sometimes a system prompt) by two different systems, intended for preference optimisation of language models (e.g. with DPO). The chosen answer is always from `gpt-4-1106-preview`; the rejected answer is always from [Rijgersberg/GEITje-7B-chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat).
Note that it is a naive assumption that GPT-4 is always better than another language model. For a better balance, some kind of scoring of the answers should be done, for instance with Argilla's [`distilabel`](https://distilabel.argilla.io/latest/), but I do not have the funds to run this rating with a state-of-the-art model (like GPT-4). If you do, please improve the dataset and share the result!
## Dataset Structure
The dataset has two splits. The number of examples per split is shown as follows:
| train_prefs | test_prefs |
|:-------:|:-----------:|
| 9,929 | 1,104 |
That means that around 10% of the data (1,104 of the 11,033 samples) is reserved for testing. One may find that this dataset is smaller than the original dataset; this is caused by OpenAI's automatic content filtering. More information is given in the sections below.
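
For illustration, a minimal sketch that loads both splits with the `datasets` library and verifies the proportions:

```python
from datasets import load_dataset

ds = load_dataset("BramVanroy/orca_dpo_pairs_dutch")

n_train = ds["train_prefs"].num_rows  # 9929
n_test = ds["test_prefs"].num_rows    # 1104
print(f"test fraction: {n_test / (n_train + n_test):.2%}")  # ~10%
```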
### Data Instances
The dataset does not have a separate system-message column; when a system prompt is present, it is included in the `chosen` and `rejected` message lists, as in the example below.
```python
{
'prompt': 'Premisse: Een artikel weerlegt de gangbare wijsheid dat Klaus Fuchs en David Greenglass de eerste atoomgeheimen aan de Sovjets hebben gelekt.\n\nHypothese: Greenglass en Fuchs waren niet verantwoordelijk voor het uitlekken van geheimen aan de Sovjets.\n\nVolgt de premisse uit de hypothese?\n\nKies uit: (A). ja (B). het is niet mogelijk om dit te zeggen (C). nee',
'prompt_id': 'b56f1662fedf7f43d1c3828e736e56146c21febf2711ae22910e7ee7280aed5e',
'chosen': [
{'content': 'Je bent een behulpzame assistent, die altijd uitleg geeft. Denk alsof je antwoord geeft aan een kind van vijf jaar oud.', 'role': 'system'},
{'content': 'Premisse: Een artikel weerlegt de gangbare wijsheid dat Klaus Fuchs en David Greenglass de eerste atoomgeheimen aan de Sovjets hebben gelekt.\n\nHypothese: Greenglass en Fuchs waren niet verantwoordelijk voor het uitlekken van geheimen aan de Sovjets.\n\nVolgt de premisse uit de hypothese?\n\nKies uit: (A). ja (B). het is niet mogelijk om dit te zeggen (C). nee', 'role': 'user'},
{'content': 'Stel je voor je hebt een doos met kleurpotloden en iedereen zegt dat je beste vriendje de blauwe en groene potloden heeft gepakt en buiten de lijntjes heeft gekleurd. Maar dan komt er iemand anders en laat zien dat niet jouw vriendje, maar iemand anders het heeft gedaan.\n\nDe premisse is eigenlijk dat nieuwe verhaal, waar iemand zegt: "Het was niet jouw vriendje die de kleuren heeft gebruikt om buiten de lijntjes te kleuren." De hypothese is wat we nog aan het uitzoeken zijn, of het echt waar is dat jouw vriendje dat niet heeft gedaan.\n\nDus, als de nieuwe informatie klopt (de premisse), dan zou dat inderdaad betekenen dat jouw vriendje (Greenglass en Fuchs) niet degene was die de potloden (geheimen) buiten de lijntjes (aan de Sovjets) heeft gekleurd. Maar we moeten nog steeds kijken naar de nieuwe informatie om zeker te zijn. \n\nDus het antwoord op de vraag of de premisse uit de hypothese volgt, is (B) het is niet mogelijk om dit te zeggen zonder meer informatie. We moeten bewijs zien dat echt laat zien dat jouw vriendje niet de geheimen heeft doorverteld. Pas dan kunnen we zeggen dat de premisse uit de hypothese volgt.', 'role': 'assistant'}
],
'rejected': [
{'content': 'Je bent een behulpzame assistent, die altijd uitleg geeft. Denk alsof je antwoord geeft aan een kind van vijf jaar oud.', 'role': 'system'},
{'content': 'Premisse: Een artikel weerlegt de gangbare wijsheid dat Klaus Fuchs en David Greenglass de eerste atoomgeheimen aan de Sovjets hebben gelekt.\n\nHypothese: Greenglass en Fuchs waren niet verantwoordelijk voor het uitlekken van geheimen aan de Sovjets.\n\nVolgt de premisse uit de hypothese?\n\nKies uit: (A). ja (B). het is niet mogelijk om dit te zeggen (C). nee', 'role': 'user'},
{'content': 'Toen wij geboren werden, wisten we heel wat we wilden doen en nog veel meer wat we niet wilden doen. Dat idee is een premisse genoemd. Ons doel is om je te vertellen dat, toen men de regels uitlegde over wie de eerste atoomgeheimen heeft doorgespeeld aan de Sovjets, we niet zeker weten of de bewering juist is. Stel je voor dat je naar een vriend gaat die een spelletje speelt, maar in plaats van zelf deel te nemen, moet je gewoon langskomen en hun idee of veronderstelling bevestigen of ontkennen. Dit spel waar wij het over hebben wordt geschiedenis genoemd.', 'role': 'assistant'}
]
}
```
### Data Fields
- **prompt**: the initial user prompt
- **prompt_id**: the unique hash of the prompt
- **chosen**: (gpt-4-1106-preview) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content
- **rejected**: (GEITje-7B-Chat) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content
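
For preference tuning, the `chosen` and `rejected` message lists usually have to be flattened into plain response strings. A minimal sketch of such a conversion (the exact format depends on your trainer and chat template; `to_dpo_example` is an illustrative helper, not part of this dataset):

```python
def to_dpo_example(sample: dict) -> dict:
    """Keep the prompt and the final assistant turn of each response list.

    Assumes `chosen` and `rejected` share the same leading messages
    (optional system prompt + user prompt) and differ only in the
    last assistant message.
    """
    return {
        "prompt": sample["prompt"],
        "chosen": sample["chosen"][-1]["content"],
        "rejected": sample["rejected"][-1]["content"],
    }

# Usage: dpo_ds = ds.map(to_dpo_example, remove_columns=["prompt_id"])
```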
## Dataset Creation
This dataset was created with [this repository](https://github.com/BramVanroy/dutch-instruction-datasets/): the `translate-hf` script was used to translate the system and instruction columns, and `answer-hf` to create answers for the translated queries.
In a first step, the first English user message was translated into Dutch alongside the system message with `gpt-4-1106-preview`. This prompt was then fed to `gpt-4-1106-preview` and `GEITje-7B-chat` to generate responses, so only the user prompt is translated. The data is therefore single-turn.
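
A rough sketch of that two-step flow with the OpenAI Python client (illustrative only; the authoritative scripts live in the repository linked above, and the translation instruction used here is an assumption):

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-1106-preview"

def translate_to_dutch(text: str) -> str:
    # Step 1: translate an English system/user message into Dutch.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Vertaal de volgende tekst naar het Nederlands."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def generate_answer(messages: list[dict]) -> str:
    # Step 2: generate a response to the translated (Dutch) prompt.
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content
```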
### Data filtering
On top of the automatic content filtering and parsing issues, additional filtering was also done. All the data is preserved in separate branches if you would like a historical view of the process of how the data was created.
- `1-translated-sys-and-q`: the initial translations of the first user message/prompt and the system prompt
- `2-gpt-4-turbo-answers`: adds the answers by GPT4
- `3-geitje-7b-chat-answers`: adds the answers by GEITje-7B-chat
- `4-llama-2-13b-chat-dutch-answers`: adds the answers by Llama 2 13B Chat Dutch - but because of the poor quality this data is not used in the final dataset
- `5-merged`: merges the instructions and the answers of the previous branches
- `6-gpt4-geitje`: removes Llama 2 output
- `7-lid`: added [language identification](https://github.com/BramVanroy/dutch-instruction-datasets/blob/main/src/dutch_data/scripts/add_lid.py) to the columns with [fastText](https://huggingface.co/facebook/fasttext-language-identification), based on the concatenation of all `content` keys in the `chosen` and `rejected` columns (see the sketch after this list).
- `8-filtered`: removed samples where the detected text was not Dutch (note that this may remove translation-focused samples!). [Other filters](https://github.com/BramVanroy/dutch-instruction-datasets/blob/main/src/dutch_data/scripts/filter_dutch.py) were also applied:
- samples with non-Latin characters are removed (very strict filtering, removes any translation tasks with non-Latin languages)
- samples with occurrences of "AI-assistent" or "AI-taalmodel" (and other derivations) are removed because these are often responses in the sense of "As an AI model, I cannot ...", which is not too useful
- samples with mentions of ChatGPT, GPT 3/4, OpenAI or ShareGPT are removed
- samples with mentions of the typical "knowledge cutoff" are removed
- samples with apologies such as "spijt me" are removed, as we are more interested in factual information and content-filled responses
- `main`: the main, default branch. Removes all "irrelevant" columns (like English messages, language identification)
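
A sketch of that language-identification step (assuming the `fasttext` package; the linked script is the authoritative implementation):

```python
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    "facebook/fasttext-language-identification", "model.bin"
)
lid_model = fasttext.load_model(model_path)

def detect_language(sample: dict) -> str:
    # Concatenate all message contents of both response columns;
    # fastText expects single-line input, so strip newlines.
    text = " ".join(
        msg["content"] for msg in sample["chosen"] + sample["rejected"]
    ).replace("\n", " ")
    labels, _probs = lid_model.predict(text)
    return labels[0]  # e.g. '__label__nld_Latn' for Dutch
```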
### Source Data
#### Initial Data Collection and Normalization
Initial data preparation by [Intel](https://huggingface.co/datasets/Intel/orca_dpo_pairs), which in turn started from [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
## Considerations for Using the Data
Note that, except for a manual analysis of around 100 random samples, this dataset has not been verified or checked for issues. However, the OpenAI filters blocked requests for translation and generation, which may indicate that some of the original prompts contained ambiguous or potentially inappropriate content that has now been filtered out.
During checking, I found that the translations of the prompt *sometimes* have issues, where parts of the content were not translated. This can happen when the model interprets the input as a command instead of translating it. This is especially true for the system messages.
### Discussion of Biases
As with any machine-generated text, users should be aware of potential biases included in this dataset. It is likely that biases remain in the dataset, so use it with caution.
### Licensing Information
This dataset was generated (either in part or in full) with GPT-4 (`gpt-4-1106-preview`), OpenAI’s large-scale language-generation model. Therefore, commercial usage is not allowed.
If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies.
### Contributions
Thanks to Michiel Buisman of [UWV](https://www.uwv.nl/particulieren/index.aspx) for reaching out and making the creation of this dataset possible with access to Azure's API.
| BramVanroy/orca_dpo_pairs_dutch | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:nl",
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-11T07:34:55+00:00 | {"language": ["nl"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "text-generation"], "pretty_name": "Orca DPO Pairs Dutch", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_prefs", "num_bytes": 38405140.71630563, "num_examples": 9929}, {"name": "test_prefs", "num_bytes": 4270246.283694372, "num_examples": 1104}], "download_size": 24140645, "dataset_size": 42675387.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}]} | 2024-02-01T12:49:01+00:00 | [] | [
"nl"
] | TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Dutch #license-cc-by-nc-4.0 #region-us
| Dataset Card for Orca DPO Pairs Dutch
=====================================
Dataset Description
-------------------
This is a Dutch recreation of Intel/orca\_dpo\_pairs. It contains single-turn responses to a prompt (and sometimes a system prompt) by two different systems, intended for preference optimisation of language models (e.g. with DPO). The chosen answer is always from 'gpt-4-1106-preview'; the rejected answer is always from Rijgersberg/GEITje-7B-chat.
Note that it is a naive assumption that GPT-4 is always better than another language model. For a better balance, some kind of scoring of the answers should be done, for instance with Argilla's 'distilabel', but I do not have the funds to run this rating with a state-of-the-art model (like GPT-4). If you do, please improve the dataset and share the result!
Dataset Structure
-----------------
The dataset has two splits. The number of examples per split is shown as follows:
That means that around 10% of the data (1,104 of the 11,033 samples) is reserved for testing. One may find that this dataset is smaller than the original dataset; this is caused by OpenAI's automatic content filtering. More information is given in the sections below.
### Data Instances
The dataset does not have a separate system-message column; when a system prompt is present, it is included in the 'chosen' and 'rejected' message lists.
### Data Fields
* prompt: the initial user prompt
* prompt\_id: the unique hash of the prompt
* chosen: (gpt-4-1106-preview) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content
* rejected: (GEITje-7B-Chat) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content
Dataset Creation
----------------
This dataset was created with this repository: the 'translate-hf' script was used to translate the system and instruction columns, and 'answer-hf' to create answers for the translated queries.
In a first step, the first English user message was translated into Dutch alongside the system message with 'gpt-4-1106-preview'. This prompt was then fed to 'gpt-4-1106-preview' and 'GEITje-7B-chat' to generate responses, so only the user prompt is translated. The data is therefore single-turn.
### Data filtering
On top of the automatic content filtering and parsing issues, additional filtering was also done. All the data is preserved in separate branches if you would like a historical view of the process of how the data was created.
* '1-translated-sys-and-q': the initial translations of the first user message/prompt and the system prompt
* '2-gpt-4-turbo-answers': adds the answers by GPT4
* '3-geitje-7b-chat-answers': adds the answers by GEITje-7B-chat
* '4-llama-2-13b-chat-dutch-answers': adds the answers by Llama 2 13B Chat Dutch - but because of the poor quality this data is not used in the final dataset
* '5-merged': merges the instructions and the answers of the previous branches
* '6-gpt4-geitje': removes Llama 2 output
* '7-lid': added language identification to the columns with fastText, based on the concatenation of all 'content' keys in the 'chosen' and 'rejected' columns.
* '8-filtered': removed samples where the detected text was not Dutch (note that this may remove translation-focused samples!). Other filters were also applied:
+ samples with non-Latin characters are removed (very strict filtering, removes any translation tasks with non-Latin languages)
+ samples with occurrences of "AI-assistent" or "AI-taalmodel" (and other derivations) are removed because these are often responses in the sense of "As an AI model, I cannot ...", which is not too useful
+ samples with mentions of ChatGPT, GPT 3/4, OpenAI or ShareGPT are removed
+ samples with mentions of the typical "knowledge cutoff" are removed
+ samples with apologies such as "spijt me" are removed, as we are more interested in factual information and content-filled responses
* 'main': the main, default branch. Removes all "irrelevant" columns (like English messages, language identification)
### Source Data
#### Initial Data Collection and Normalization
Initial data preparation by Intel, which in turn started from OpenOrca.
Considerations for Using the Data
---------------------------------
Note that, except for a manual analysis of around 100 random samples, this dataset has not been verified or checked for issues. However, the OpenAI filters blocked requests for translation and generation, which may indicate that some of the original prompts contained ambiguous or potentially inappropriate content that has now been filtered out.
During checking, I found that the translations of the prompt *sometimes* have issues, where parts of the content were not translated. This can happen when the model interprets the input as a command instead of translating it. This is especially true for the system messages.
### Discussion of Biases
As with any machine-generated text, users should be aware of potential biases included in this dataset. It is likely that biases remain in the dataset, so use it with caution.
### Licensing Information
This dataset was generated (either in part or in full) with GPT-4 ('gpt-4-1106-preview'), OpenAI’s large-scale language-generation model. Therefore, commercial usage is not allowed.
If you use this dataset, you must also follow the Sharing and Usage policies.
### Contributions
Thanks to Michiel Buisman of UWV for reaching out and making the creation of this dataset possible with access to Azure's API.
| [
"### Data Instances\n\n\nThe dataset does not have system messages.",
"### Data Fields\n\n\n* prompt: the initial user prompt\n* prompt\\_id: the unique hash of the prompt\n* chosen: (gpt-4-1106-preview) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n* rejected: (GEITje-7B-Chat) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n\n\nDataset Creation\n----------------\n\n\nThis dataset was created with this repository. The 'translate-hf' script for translating the system and instruction columns, and 'answer-hf' to create answers for the translated query.\n\n\nSo in a first step, the English first user message was translated alongside the system message with 'gpt-4-1106-preview' to Dutch. This prompt is then fed to 'gpt-4-1106-preview' and 'GEITje-7B-chat' to generate responses, so only the user prompt is translated. The data is therefore single-turn.",
"### Data filtering\n\n\nOn top of the automatic content filtering and parsing issues, additional filtering was also done. All the data is preserved in separate branches if you would like a historical view of the process of how the data was created.\n\n\n* '1-translated-sys-and-q': the initial translations of the first user message/prompt and the system prompt\n* '2-gpt-4-turbo-answers': adds the answers by GPT4\n* '3-geitje-7b-chat-answers': adds the answers by GEITje-7B-chat\n* '4-llama-2-13b-chat-dutch-answers': adds the answers by Llama 2 13B Chat Dutch - but because of the poor quality this data is not used in the final dataset\n* '5-merged': merges the instructions and the answers of the previous branches\n* '6-gpt4-geitje': removes Llama 2 output\n* '7-lid': added language identification to the columns with fastText, which is based on the concatenation of all 'content' keys in the 'chosen' and 'rejected' column.\n* '8-filtered': removed samples where the detected text was not Dutch. (Note that this may remove translation-focused samples!). Other filters\n\t+ samples with non-Latin characters are removed (very strict filtering, removes any translation tasks with non-Latin languages)\n\t+ samples with occurrences of \"AI-assistent\" or \"AI-taalmodel\" (and other derivations) are removed because these are often responses in the sense of \"As an AI model, I cannot ...\", which is not too useful\n\t+ samples with mentions of ChatGPT, GPT 3/4, OpenAI or ShareGPT are removed\n\t+ samples with mentions of the typical \"knowledge cutoff\" are removed\n\t+ samples with apologies such as \"spijt me\" are removed, as we are more interested in factual information and content-filled responses\n* 'main': the main, default branch. Removes all \"irrelevant\" columns (like English messages, language identification)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data preparation by Intel, which in turn started from OpenOrca.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nNote that except for a manual analysis of around 100 random samples, this dataset has not been verified or checked for issues. However, the OpenAI filters blocked requests for translation and generation, which may indicate that some of the original prompts may contained ambiguous or potentially inappropriate content that has now been filtered.\n\n\nDuring checking, I found that the translations of the prompt *sometimes* have issues, where parts of the content was not translated. This can happen when the model interprets the input as a command, for instance, instead of translating it. This is especially true for the system messages.",
"### Discussion of Biases\n\n\nAs with any machine-generated texts, users should be aware of potential biases that are included in this dataset. It is likely that biases remain in the dataset so use with caution.",
"### Licensing Information\n\n\nThis dataset was generated (either in part or in full) with GPT-4 ('gpt-4-1106-preview'), OpenAI’s large-scale language-generation model. Therefore commercial usage is not allowed.\n\n\nIf you use this dataset, you must also follow the Sharing and Usage policies.",
"### Contributions\n\n\nThanks to Michiel Buisman of UWV for reaching out and making the creation of this dataset possible with access to Azure's API."
] | [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Dutch #license-cc-by-nc-4.0 #region-us \n",
"### Data Instances\n\n\nThe dataset does not have system messages.",
"### Data Fields\n\n\n* prompt: the initial user prompt\n* prompt\\_id: the unique hash of the prompt\n* chosen: (gpt-4-1106-preview) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n* rejected: (GEITje-7B-Chat) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n\n\nDataset Creation\n----------------\n\n\nThis dataset was created with this repository. The 'translate-hf' script for translating the system and instruction columns, and 'answer-hf' to create answers for the translated query.\n\n\nSo in a first step, the English first user message was translated alongside the system message with 'gpt-4-1106-preview' to Dutch. This prompt is then fed to 'gpt-4-1106-preview' and 'GEITje-7B-chat' to generate responses, so only the user prompt is translated. The data is therefore single-turn.",
"### Data filtering\n\n\nOn top of the automatic content filtering and parsing issues, additional filtering was also done. All the data is preserved in separate branches if you would like a historical view of the process of how the data was created.\n\n\n* '1-translated-sys-and-q': the initial translations of the first user message/prompt and the system prompt\n* '2-gpt-4-turbo-answers': adds the answers by GPT4\n* '3-geitje-7b-chat-answers': adds the answers by GEITje-7B-chat\n* '4-llama-2-13b-chat-dutch-answers': adds the answers by Llama 2 13B Chat Dutch - but because of the poor quality this data is not used in the final dataset\n* '5-merged': merges the instructions and the answers of the previous branches\n* '6-gpt4-geitje': removes Llama 2 output\n* '7-lid': added language identification to the columns with fastText, which is based on the concatenation of all 'content' keys in the 'chosen' and 'rejected' column.\n* '8-filtered': removed samples where the detected text was not Dutch. (Note that this may remove translation-focused samples!). Other filters\n\t+ samples with non-Latin characters are removed (very strict filtering, removes any translation tasks with non-Latin languages)\n\t+ samples with occurrences of \"AI-assistent\" or \"AI-taalmodel\" (and other derivations) are removed because these are often responses in the sense of \"As an AI model, I cannot ...\", which is not too useful\n\t+ samples with mentions of ChatGPT, GPT 3/4, OpenAI or ShareGPT are removed\n\t+ samples with mentions of the typical \"knowledge cutoff\" are removed\n\t+ samples with apologies such as \"spijt me\" are removed, as we are more interested in factual information and content-filled responses\n* 'main': the main, default branch. Removes all \"irrelevant\" columns (like English messages, language identification)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data preparation by Intel, which in turn started from OpenOrca.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nNote that except for a manual analysis of around 100 random samples, this dataset has not been verified or checked for issues. However, the OpenAI filters blocked requests for translation and generation, which may indicate that some of the original prompts may contained ambiguous or potentially inappropriate content that has now been filtered.\n\n\nDuring checking, I found that the translations of the prompt *sometimes* have issues, where parts of the content was not translated. This can happen when the model interprets the input as a command, for instance, instead of translating it. This is especially true for the system messages.",
"### Discussion of Biases\n\n\nAs with any machine-generated texts, users should be aware of potential biases that are included in this dataset. It is likely that biases remain in the dataset so use with caution.",
"### Licensing Information\n\n\nThis dataset was generated (either in part or in full) with GPT-4 ('gpt-4-1106-preview'), OpenAI’s large-scale language-generation model. Therefore commercial usage is not allowed.\n\n\nIf you use this dataset, you must also follow the Sharing and Usage policies.",
"### Contributions\n\n\nThanks to Michiel Buisman of UWV for reaching out and making the creation of this dataset possible with access to Azure's API."
] | [
56,
15,
246,
497,
4,
174,
53,
77,
37
] | [
"passage: TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-1K<n<10K #language-Dutch #license-cc-by-nc-4.0 #region-us \n### Data Instances\n\n\nThe dataset does not have system messages.### Data Fields\n\n\n* prompt: the initial user prompt\n* prompt\\_id: the unique hash of the prompt\n* chosen: (gpt-4-1106-preview) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n* rejected: (GEITje-7B-Chat) list of messages (dictionaries) where each dictionary has a role (user, assistant, system) and content\n\n\nDataset Creation\n----------------\n\n\nThis dataset was created with this repository. The 'translate-hf' script for translating the system and instruction columns, and 'answer-hf' to create answers for the translated query.\n\n\nSo in a first step, the English first user message was translated alongside the system message with 'gpt-4-1106-preview' to Dutch. This prompt is then fed to 'gpt-4-1106-preview' and 'GEITje-7B-chat' to generate responses, so only the user prompt is translated. The data is therefore single-turn."
] |
82b89a6c2f7a745f9d6180e9c348cc7fc278d0ac | # Dataset Card for "ds1_names_10_lora_merge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/ds1_names_10_lora_merge | [
"region:us"
] | 2023-12-11T07:53:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44101.0, "num_examples": 300}, {"name": "validation", "num_bytes": 12243.333333333334, "num_examples": 100}], "download_size": 0, "dataset_size": 56344.333333333336}} | 2023-12-11T07:54:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ds1_names_10_lora_merge"
More Information needed | [
"# Dataset Card for \"ds1_names_10_lora_merge\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ds1_names_10_lora_merge\"\n\nMore Information needed"
] | [
6,
23
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ds1_names_10_lora_merge\"\n\nMore Information needed"
] |
e62e2b46564c54204b878ddd6f247ae537b535f0 | # Dataset Card for "ds2_names_10_lora_merge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/ds2_names_10_lora_merge | [
"region:us"
] | 2023-12-11T07:53:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44101.0, "num_examples": 300}, {"name": "validation", "num_bytes": 12243.333333333334, "num_examples": 100}], "download_size": 0, "dataset_size": 56344.333333333336}} | 2023-12-11T07:54:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ds2_names_10_lora_merge"
More Information needed | [
"# Dataset Card for \"ds2_names_10_lora_merge\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ds2_names_10_lora_merge\"\n\nMore Information needed"
] | [
6,
23
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ds2_names_10_lora_merge\"\n\nMore Information needed"
] |
077b2f100deca7414c95d293f2379185532aeb7d |
# Dataset of Hirasaka Yotsuyu
This is the dataset of Hirasaka Yotsuyu, containing 404 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 404 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 939 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 1050 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 404 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 404 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 404 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 939 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 939 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 740 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 1050 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 1050 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/hirasaka_yotsuyu_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T07:54:55+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:07:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Hirasaka Yotsuyu
===========================
This is the dataset of Hirasaka Yotsuyu, containing 404 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
9ba46a175db04a19ebf4f8798232274650e77392 |
# Dataset of Yumi Yotsuya
This is the dataset of Yumi Yotsuya, containing 92 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 92 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 211 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 256 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 92 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 92 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 92 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 211 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 211 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 184 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 256 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 256 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/yumi_yotsuya_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:15:12+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:19:02+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Yumi Yotsuya
=======================
This is the dataset of Yumi Yotsuya, containing 92 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
38b4c33eb92e1b43ad2c7d76ed1e182225fc7a6d |
# Dataset of Raiya Kaeruzaka
This is the dataset of Raiya Kaeruzaka, containing 78 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 78 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 181 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 210 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 78 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 78 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 78 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 181 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 181 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 149 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 210 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 210 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/raiya_kaeruzaka_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:25:45+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:29:55+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Raiya Kaeruzaka
==========================
This is the dataset of Raiya Kaeruzaka, containing 78 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
0419232d5d9ae1fb63a2791c203d18f920af4d33 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
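
As a stopgap while this section awaits details, a minimal loading sketch based on the split configuration declared in this repository's metadata (train/test/val zip archives):

```python
from datasets import load_dataset

# Splits follow the data_files configuration in the dataset metadata.
ds = load_dataset("bastienp/visible-watermark-pita")
print(ds)  # expected: DatasetDict with 'train', 'test' and 'val' splits
```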
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Licence

**Annotations & Website**

The annotations in this dataset, along with this website, belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.

**Images**

The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
| bastienp/visible-watermark-pita | [
"task_categories:object-detection",
"watermak",
"computer-vision",
"object-detection",
"region:us"
] | 2023-12-11T08:27:29+00:00 | {"task_categories": ["object-detection"], "tags": ["watermak", "computer-vision", "object-detection"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train.zip"}, {"split": "test", "path": "data/test.zip"}, {"split": "val", "path": "data/val.zip"}]}]} | 2024-01-31T16:20:10+00:00 | [] | [] | TAGS
#task_categories-object-detection #watermak #computer-vision #object-detection #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Licence:
Annotations & Website
The annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.
Images
The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Licence:\n\nAnnotations & Website\n\nThe annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.\nImages\n\nThe COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-object-detection #watermak #computer-vision #object-detection #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Licence:\n\nAnnotations & Website\n\nThe annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.\nImages\n\nThe COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
29,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
112,
8,
7,
10,
5
] | [
"passage: TAGS\n#task_categories-object-detection #watermak #computer-vision #object-detection #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Licence:\n\nAnnotations & Website\n\nThe annotations in this dataset along with this website belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License.\nImages\n\nThe COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
a93f21484681427a2efa13437e6ad0b30e388845 |
# Dataset of Kaoruko Yamano
This is the dataset of Kaoruko Yamano, containing 64 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 64 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 154 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 170 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 64 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 64 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 64 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 154 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 154 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 137 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 170 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 170 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/kaoruko_yamano_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:35:04+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:36:35+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Kaoruko Yamano
=========================
This is the dataset of Kaoruko Yamano, containing 64 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
7966ca23879910b2e68bdc3e04f7800d414f59d7 |
# Dataset of Mari Jinguuji
This is the dataset of Mari Jinguuji, containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 52 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 128 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 159 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 52 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 52 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 52 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 128 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 128 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 101 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 159 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 159 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/mari_jinguuji_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:40:52+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:43:11+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Mari Jinguuji
========================
This is the dataset of Mari Jinguuji, containing 52 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
25765e9a5e98b265fb17d7b0435d4e965090500f |
# Dataset of Sugumi Kanagata
This is the dataset of Sugumi Kanagata, containing 30 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 30 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 71 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 84 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 30 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 30 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 30 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 71 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 71 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 61 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 84 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 84 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
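If useful, the archives in this table can be fetched programmatically; a minimal sketch with the `huggingface_hub` client (assuming the zip files sit at the repository root, as the relative links above suggest):

```python
from huggingface_hub import hf_hub_download

# Fetch one of the packaged variants listed above, e.g. the raw data.
path = hf_hub_download(
    repo_id="CyberHarem/sugumi_kanagata_alicegearaegisexpansion",
    filename="dataset-raw.zip",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded archive
```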
| CyberHarem/sugumi_kanagata_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:45:34+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:47:01+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
39f867aff65ad45f602e45c8bd99290bd58d1233 | # Dataset Card for "kde4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ArmelRandy/kde4 | [
"region:us"
] | 2023-12-11T08:49:23+00:00 | {"dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "fr", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10583004, "num_examples": 20058}], "download_size": 5304712, "dataset_size": 10583004}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T08:49:25+00:00 | [] | [] | TAGS
#region-us
45a61e7c2292da55b25dc733e10855ce88fc7b49 |
# Dataset of Aika Aikawa
This is the dataset of Aika Aikawa, containing 36 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 36 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 88 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 98 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 36 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 36 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 36 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 88 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 88 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 67 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 98 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 98 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/aika_aikawa_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:49:27+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:50:25+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
da9e4bba09d493d01e9f0e33dc024f7839bd456d |
# Dataset of Ichijou Ayaka
This is the dataset of Ichijou Ayaka, containing 42 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 42 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 95 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 105 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 42 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 42 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 42 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 95 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 95 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 81 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 105 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 105 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/ichijou_ayaka_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:53:16+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:54:36+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
1a4116a1a29a7f012f012dde3a58bb42e73e18c8 |
# Dataset of Niiya Serina
This is the dataset of Niiya Serina, containing 27 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 27 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 68 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 70 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 27 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 27 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 27 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 68 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 68 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 65 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 70 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 70 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/niiya_serina_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T08:56:42+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T08:57:37+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
7c5de653a8fd1bc83ec5418f61ffe5f20cbd7293 |
# Dataset of Rita Henschel
This is the dataset of Rita Henschel, containing 50 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 50 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 121 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 139 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 50 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 50 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 50 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 121 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 121 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 109 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 139 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 139 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/rita_henschel_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T09:01:48+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T09:03:43+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
afc05d052adea1f0e8ece5d44b192b861c635e24 | # Dataset Card for "save2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ktennyson6/save2 | [
"region:us"
] | 2023-12-11T09:01:54+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "relation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 981790431.631, "num_examples": 6237}], "download_size": 851998201, "dataset_size": 981790431.631}} | 2023-12-11T10:08:51+00:00 | [] | [] | TAGS
#region-us
569c657188c1a48c17f718c2f8417352de1a2ed9 |
# Dataset of Agatsuma Kaede
This is the dataset of Agatsuma Kaede, containing 36 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 36 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 83 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 96 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 36 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 36 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 36 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 83 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 83 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 67 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 96 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 96 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/agatsuma_kaede_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T09:06:18+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T09:07:27+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
a4a10bf6c39e14b4466ca1022b7b414eef24ecd0 |
# Dataset of Himukai Rin
This is the dataset of Himukai Rin, containing 40 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 40 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 87 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 105 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 40 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 40 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 40 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 87 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 87 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 77 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 105 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 105 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| CyberHarem/himukai_rin_alicegearaegisexpansion | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-11T09:10:49+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-11T09:12:29+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
c988ad0ff9bf8da8489c675e94d2aa156185923b | # ASR training dataset for Serbian JuzneVesti-SR v1.0
hdl: http://hdl.handle.net/11356/1679
The JuzneVesti-SR dataset consists of audio recordings and manual transcripts from the Južne Vesti website and its host show called '15 minuta' (https://www.juznevesti.com/Tagovi/Intervju-15-minuta.sr.html).
The processing of the audio and its alignment to the manual transcripts followed the pipeline of the ParlaSpeech-HR dataset (http://hdl.handle.net/11356/1494) as closely as possible.
Segments in this dataset range from 2 to 30 seconds.
Train-dev-test split has been performed with 80:10:10 ratio.
As with the ParlaSpeech-HR dataset, two transcriptions are provided: one with transcripts in their raw form (with punctuation, capital letters, and numerals) and another normalised with the same rule-based normaliser used in the ParlaSpeech-HR dataset creation, in which text is lowercased, punctuation is removed, and numerals are replaced with words.
Original transcripts were collected with the help of the ReLDI Centre Belgrade (https://reldi.spur.uzh.ch).
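A minimal loading sketch (assuming the Hugging Face `datasets` API; the field names `transcript` and `norm_transcript` follow this repository's dataset configuration):

```python
from datasets import load_dataset

ds = load_dataset("5roop/juzne_vesti")  # splits: train, dev, test (80:10:10)
sample = ds["train"][0]
print(sample["transcript"])       # raw form: punctuation, capitals, numerals
print(sample["norm_transcript"])  # normalised: lowercased, numerals as words
```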
Please cite as
```
@misc{11356/1679,
title = {{ASR} training dataset for Serbian {JuzneVesti}-{SR} v1.0},
author = {Rupnik, Peter and Ljube{\v s}i{\'c}, Nikola},
url = {http://hdl.handle.net/11356/1679},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
issn = {2820-4042},
year = {2022} }
``` | 5roop/juzne_vesti | [
"size_categories:10K<n<100K",
"language:sr",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-12-11T09:23:35+00:00 | {"language": ["sr"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "Ju\u017ene Vesti", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "split", "dtype": "string"}, {"name": "transcript", "dtype": "string"}, {"name": "norm_transcript", "dtype": "string"}, {"name": "guest_name", "dtype": "string"}, {"name": "host", "dtype": "string"}, {"name": "guest_description", "dtype": "string"}, {"name": "speaker_breakdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4687838374.879606, "num_examples": 8648}, {"name": "test", "num_bytes": 584596072.5389507, "num_examples": 1081}, {"name": "dev", "num_bytes": 583281117.6094437, "num_examples": 1082}], "download_size": 5813877393, "dataset_size": 5855715565.028001}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "handle": ["http://hdl.handle.net/11356/1679"]} | 2023-12-12T08:00:11+00:00 | [] | [
"sr"
] | TAGS
#size_categories-10K<n<100K #language-Serbian #license-cc-by-sa-4.0 #region-us
7efb28ce7c4db45338b91afa0d25d0c4f7c62121 | # speechocean762: A non-native English corpus for pronunciation scoring task
## How to use?
You can load the data using:
```py
from datasets import load_dataset

speechocean762_dataset = load_dataset('seba3y/speechocean762')
```
```py
>> speechocean762_dataset
DatasetDict({
train: Dataset({
features: ['spk', 'age', 'gender', 'utt_name', 'audio', 'utt_text', 'utt_accuracy', 'utt_completeness', 'utt_fluency', 'utt_prosodic', 'utt_total', 'words', 'words_accuracy', 'words_stress', 'words_total', 'phones', 'phones_godness'],
num_rows: 2500
})
test: Dataset({
features: ['spk', 'age', 'gender', 'utt_name', 'audio', 'utt_text', 'utt_accuracy', 'utt_completeness', 'utt_fluency', 'utt_prosodic', 'utt_total', 'words', 'words_accuracy', 'words_stress', 'words_total', 'phones', 'phones_godness'],
num_rows: 2500
})
})
```
Features are ordered as follows:
1- Demographic features: `'spk', 'age', 'gender', 'utt_name'`
2- Sentence-level features: `'audio', 'utt_text', 'utt_accuracy', 'utt_completeness', 'utt_fluency', 'utt_prosodic', 'utt_total'`
3- Word-level features: `'words', 'words_accuracy', 'words_stress', 'words_total'`
4- Phoneme-level features: `'phones', 'phones_godness'`
```py
>> speechocean762_dataset['train'][0]
```
```py
{'spk': '0001',
'age': 6,
'gender': 'm',
'utt_name': '000010011',
'audio': {'path': '/content/speechocean762/WAVE/SPEAKER0001/000010011.WAV',
'array': array([-9.46044922e-04, -2.38037109e-03, -1.31225586e-03, ...,
-9.15527344e-05, 3.05175781e-04, -2.44140625e-04]),
'sampling_rate': 16000},
'utt_text': 'WE CALL IT BEAR',
'utt_accuracy': 8,
'utt_completeness': 10.0,
'utt_fluency': 9,
'utt_prosodic': 9,
'utt_total': 8,
'words': "['WE', 'CALL', 'IT', 'BEAR']",
'words_accuracy': '[10, 10, 10, 6]',
'words_stress': '[10, 10, 10, 10]',
'words_total': '[10, 10, 10, 6]',
'phones': "[['W', 'IY0'], ['K', 'AO0', 'L'], ['IH0', 'T'], ['B', 'EH0', 'R']]",
'phones_godness': '[[2.0, 2.0], [2.0, 1.8, 1.8], [2.0, 2.0], [2.0, 1.0, 1.0]]'}
```
For word-level features, 'words' in each sample is a list of words, while 'words_accuracy', 'words_stress', and 'words_total' are lists of the same length; the mapping is index-wise, so the first word corresponds to the first value in 'words_accuracy', and so on. For phoneme-level features, 'phones' in each sample is a 2D list whose sublists hold the phones of one word each, and 'phones_godness' has the same shape.
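As a minimal sketch of this index-wise mapping, reusing `speechocean762_dataset` from the loading snippet above (and assuming the list-valued fields arrive as Python-style strings, as in the sample; if they load as real lists, drop the `literal_eval` calls):

```py
import ast

def word_scores(sample):
    # One entry per word: (word, accuracy, stress, total), matched by index.
    fields = ['words', 'words_accuracy', 'words_stress', 'words_total']
    words, acc, stress, total = (ast.literal_eval(sample[f]) for f in fields)
    return list(zip(words, acc, stress, total))

def phone_scores(sample):
    # 'phones' and 'phones_godness' are 2D lists with one sublist per word.
    phones = ast.literal_eval(sample['phones'])
    goodness = ast.literal_eval(sample['phones_godness'])
    return [list(zip(p, g)) for p, g in zip(phones, goodness)]

sample = speechocean762_dataset['train'][0]
print(word_scores(sample))   # [('WE', 10, 10, 10), ('CALL', 10, 10, 10), ...]
print(phone_scores(sample))  # [[('W', 2.0), ('IY0', 2.0)], ...]
```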
## Introduction
Pronunciation scoring is a crucial technology in computer-assisted language learning (CALL) systems. The pronunciation quality scores might be given at phoneme-level, word-level, and sentence-level for a typical pronunciation scoring task.
This corpus aims to provide a free public dataset for the pronunciation scoring task.
Key features:
* It is available for free download for both commercial and non-commercial purposes.
* The speaker variety encompasses young children and adults.
* The manual annotations are in multiple aspects at sentence-level, word-level and phoneme-level.
This corpus consists of 5000 English sentences. All the speakers are non-native, and their mother tongue is Mandarin. Half of the speakers are children, and the others are adults. Age and gender information is provided.
Five experts made the scores. To avoid subjective bias, each expert scores independently under the same metric.
## The scoring metric
The experts score at three levels: phoneme-level, word-level, and sentence-level.
### Phoneme level
Score the pronunciation goodness of each phoneme within the words.
Score range: 0-2
* 2: pronunciation is correct
* 1: pronunciation is right but has a heavy accent
* 0: pronunciation is incorrect or missed
### Word level
Score the accuracy and stress of each word's pronunciation.
#### Accuracy
Score range: 0 - 10
* 10: The pronunciation of the word is perfect
* 7-9: Most phones in this word are pronounced correctly but have accents
* 4-6: Less than 30% of phones in this word are wrongly pronounced
* 2-3: More than 30% of phones in this word are wrongly pronounced. In another case, the word is mispronounced as some other word. For example, the student mispronounced the word "bag" as "bike"
* 1: The pronunciation is hard to distinguish
* 0: no voice
#### Stress
Score range: {5, 10}
* 10: The stress is correct, or this is a mono-syllable word
* 5: The stress is wrong
### Sentence level
Score the accuracy, fluency, completeness and prosodic at the sentence level.
#### Accuracy
Score range: 0 - 10
* 9-10: The overall pronunciation of the sentence is excellent, with accurate phonology and no obvious pronunciation mistakes
* 7-8: The overall pronunciation of the sentence is good, with a few pronunciation mistakes
* 5-6: The overall pronunciation of the sentence is understandable, with many pronunciation mistakes and accent, but it does not affect the understanding of basic meanings
* 3-4: Poor, clumsy and rigid pronunciation of the sentence as a whole, with serious pronunciation mistakes
* 0-2: Extremely poor pronunciation and only one or two words are recognizable
#### Completeness
Score range: 0.0 - 1.0
The percentage of the words with good pronunciation.
#### Fluency
Score range: 0 - 10
* 8-10: Fluent without noticeable pauses or stammering
* 6-7: Fluent in general, with a few pauses, repetition, and stammering
* 4-5: The speech is somewhat disfluent, with many pauses, repetition, and stammering
* 0-3: Intermittent, very disfluent speech, with lots of pauses, repetition, and stammering
#### Prosodic
Score range: 0 - 10
* 9-10: Correct intonation at a stable speaking speed, speak with cadence, and can speak like a native
* 7-8: Nearly correct intonation at a stable speaking speed, nearly smooth and coherent, but with little stammering and few pauses
* 5-6: Unstable speech speed, many stammering and pauses with a poor sense of rhythm
* 3-4: Unstable speech speed, speak too fast or too slow, without the sense of rhythm
* 0-2: Poor intonation and lots of stammering and pauses, unable to read a complete sentence
## Data structure
The following tree shows the file structure of this corpus on [github](https://github.com/jimbozhang/speechocean762):
```
├── scores.json
├── scores-detail.json
├── train
│ ├── spk2age
│ ├── spk2gender
│ ├── spk2utt
│ ├── text
│ ├── utt2spk
│ └── wav.scp
├── test
│ ├── spk2age
│ ├── spk2gender
│ ├── spk2utt
│ ├── text
│ ├── utt2spk
│ └── wav.scp
└── WAVE
├── SPEAKER0001
│ ├── 000010011.WAV
│ ├── 000010035.WAV
│ ├── ...
│ └── 000010173.WAV
├── SPEAKER0003
│ ├── 000030012.WAV
│ ├── 000030024.WAV
│ ├── ...
│ └── 000030175.WAV
└── SPEAKER0005
├── 000050003.WAV
├── 000050010.WAV
├── ...
└── 000050175.WAV
```
There are two datasets: `train` and `test`, and both are in Kaldi's data directory style.
The scores are stored in `scores.json`. Here is an example:
```
{
"000010011": { # utt-id
"text": "WE CALL IT BEAR", # transcript text
"accuracy": 8, # sentence-level accuracy score
"completeness": 10.0, # sentence-level completeness score
"fluency": 9, # sentence-level fluency score
"prosodic": 9, # sentence-level prosodic score
"total": 8, # sentence-level total score
"words": [
{
"accuracy": 10, # word-level accuracy score
"stress": 10, # word-level stress score
"total": 10, # word-level total score
"text": "WE", # the word text
"phones": "W IY0", # phones of the word
"phones-accuracy": [2.0, 2.0] # phoneme-level accuracy score
},
{
"accuracy": 10,
"stress": 10,
"total": 10,
"text": "CALL",
"phones": "K AO0 L",
"phones-accuracy": [2.0, 1.8, 1.8]
},
{
"accuracy": 10,
"stress": 10,
"total": 10,
"text": "IT",
"phones": "IH0 T",
"phones-accuracy": [2.0, 2.0]
},
{
"accuracy": 6,
"stress": 10,
"total": 6,
"text": "BEAR",
"phones": "B EH0 R",
"phones-accuracy": [2.0, 1.0, 1.0]
}
]
},
...
}
```
For phones with an accuracy score lower than 0.5, an extra "mispronunciations" block indicates which phoneme was actually pronounced in place of the canonical phone.
An example:
```
{
"text": "LISA",
"accuracy": 5,
"phones": ["L", "IY1", "S", "AH0"],
"phones-accuracy": [0.4, 2, 2, 1.2],
"mispronunciations": [
{
"canonical-phone": "L",
"index": 0,
"pronounced-phone": "D"
}
],
"stress": 10,
"total": 6
}
```
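A short illustrative sketch for pulling these substitutions out of a word entry (`substitutions` is a hypothetical helper, based only on the structure shown above):

```py
def substitutions(word_entry):
    # Returns (canonical phone, pronounced phone, position) triples.
    return [(m["canonical-phone"], m["pronounced-phone"], m["index"])
            for m in word_entry.get("mispronunciations", [])]

# For the "LISA" entry above: [('L', 'D', 0)]
```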
The file `scores.json` is processed from `scores-detail.json`.
The two JSON files are almost the same, but `scores-detail.json` holds the five experts' original scores, while the scores in `scores.json` are the average or median of those.
An example item in `scores-detail.json`:
```
{
"000010011": {
"text": "WE CALL IT BEAR",
"accuracy": [7.0, 9.0, 8.0, 8.0, 9.0],
"completeness": [1.0, 1.0, 1.0, 1.0, 1.0],
"fluency": [10.0, 9.0, 8.0, 8.0, 10.0],
"prosodic": [10.0, 9.0, 7.0, 8.0, 9.0],
"total": [7.6, 9.0, 7.9, 8.0, 9.1],
"words": [
{
"accuracy": [10.0, 10.0, 10.0, 10.0, 10.0],
"stress": [10.0, 10.0, 10.0, 10.0, 10.0],
"total": [10.0, 10.0, 10.0, 10.0, 10.0],
"text": "WE",
"ref-phones": "W IY0",
"phones": ["W IY0", "W IY0", "W IY0", "W IY0", "W IY0"]
},
{
"accuracy": [10.0, 8.0, 10.0, 10.0, 8.0],
"stress": [10.0, 10.0, 10.0, 10.0, 10.0],
"total": [10.0, 8.4, 10.0, 10.0, 8.4],
"text": "CALL",
"ref-phones": "K AO0 L",
"phones": ["K AO0 L", "K {AO0} L", "K AO0 L", "K AO0 L", "K AO0 {L}"],
},
{
"accuracy": [10.0, 10.0, 10.0, 10.0, 10.0],
"stress": [10.0, 10.0, 10.0, 10.0, 10.0],
"total": [10.0, 10.0, 10.0, 10.0, 10.0],
"text": "IT",
"ref-phones": "IH0 T",
"phones": ["IH0 T", "IH0 T", "IH0 T", "IH0 T", "IH0 T"]
},
{
"accuracy": [3.0, 7.0, 10.0, 2.0, 6.0],
"stress": [10.0, 10.0, 10.0, 10.0, 10.0],
"phones": ["B (EH0) (R)", "B {EH0} {R}", "B EH0 R", "B (EH0) (R)", "B EH0 [L] R"],
"total": [4.4, 7.6, 10.0, 3.6, 6.8],
"text": "BEAR",
"ref-phones": "B EH0 R"
}
],
},
...
}
```
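As noted above, `scores.json` aggregates the five expert scores from `scores-detail.json`; a sketch of that aggregation (whether a given field uses the mean or the median is not stated here, so both are shown):

```py
import json
from statistics import mean, median

with open("scores-detail.json") as f:
    detail = json.load(f)

acc = detail["000010011"]["accuracy"]  # e.g. [7.0, 9.0, 8.0, 8.0, 9.0]
print(mean(acc), median(acc))          # 8.2 8.0
```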
In `scores-detail.json`, the phoneme-level scores are notated in the following convenient notation:
* for score 2, do not use any symbol
* for score 1, use "{}" symbol
* for score 0, use "()" symbol
* for the inserted phone, use the "[]" symbol
For example, "B (EH) R" means the score of EH is 0 while the scores of B and R are both 2,
"B EH [L] R" mean there is an unexpected phone "L" and the other phones are scored 2.
## Citation
Please cite our paper if you find this work useful:
```bibtex
@inproceedings{zhang2021speechocean762,
title={speechocean762: An Open-Source Non-native English Speech Corpus For Pronunciation Assessment},
author={Zhang, Junbo and Zhang, Zhiwen and Wang, Yongqing and Yan, Zhiyong and Song, Qiong and Huang, Yukai and Li, Ke and Povey, Daniel and Wang, Yujun},
booktitle={Proc. Interspeech 2021},
year={2021}
}
``` | seba3y/speechocean762 | [
"task_categories:audio-classification",
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"pronunciation-scoring",
"L1-Mandarin",
"L2-English",
"region:us"
] | 2023-12-11T09:35:14+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["audio-classification", "automatic-speech-recognition"], "pretty_name": " ", "tags": ["pronunciation-scoring", "L1-Mandarin", "L2-English"], "dataset_info": {"features": [{"name": "spk", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "utt_name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "utt_text", "dtype": "string"}, {"name": "utt_accuracy", "dtype": "int64"}, {"name": "utt_completeness", "dtype": "float64"}, {"name": "utt_fluency", "dtype": "int64"}, {"name": "utt_prosodic", "dtype": "int64"}, {"name": "utt_total", "dtype": "int64"}, {"name": "words", "sequence": "string"}, {"name": "words_accuracy", "sequence": "int64"}, {"name": "words_stress", "sequence": "int64"}, {"name": "words_total", "sequence": "int64"}, {"name": "phones", "sequence": {"sequence": "string"}}, {"name": "phones_godness", "sequence": {"sequence": "float64"}}, {"name": "duration", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 333075617.5, "num_examples": 2500}, {"name": "test", "num_bytes": 311790040.5, "num_examples": 2500}], "download_size": 611757634, "dataset_size": 644865658.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-19T21:54:08+00:00 | [] | [
"en"
] | TAGS
#task_categories-audio-classification #task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #pronunciation-scoring #L1-Mandarin #L2-English #region-us
] |
7c106c20426415e7bf0e0bedbc8d0095bb94d4d0 |
# Dataset Card for Evaluation run of Weyaxi/Seraph-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/Seraph-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__Seraph-7B",
"harness_winogrande_5",
split="train")
```
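
If you only need the aggregated metrics rather than the per-sample details, you can target the "results" configuration described above. A minimal sketch, assuming the "latest" split naming used throughout this repository:

```python
from datasets import load_dataset

# Aggregated metrics live in the "results" configuration; the "latest"
# split mirrors the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_Weyaxi__Seraph-7B",
                       "results",
                       split="latest")
print(results[0])  # one row holding the aggregated metrics of the run
```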
## Latest results
These are the [latest results from run 2023-12-11T09:44:37.311244](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Seraph-7B/blob/main/results_2023-12-11T09-44-37.311244.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6548171567091998,
"acc_stderr": 0.031923676546826464,
"acc_norm": 0.6547760288690921,
"acc_norm_stderr": 0.03258255753948947,
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.5948960816711865,
"mc2_stderr": 0.015146045918500203
},
"harness|arc:challenge|25": {
"acc": 0.6544368600682594,
"acc_stderr": 0.013896938461145683,
"acc_norm": 0.6783276450511946,
"acc_norm_stderr": 0.013650488084494166
},
"harness|hellaswag|10": {
"acc": 0.6727743477394941,
"acc_stderr": 0.004682414968323629,
"acc_norm": 0.8621788488348935,
"acc_norm_stderr": 0.003440076775300575
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7302631578947368,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.7302631578947368,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.02794321998933714,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.02794321998933714
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5957446808510638,
"acc_stderr": 0.03208115750788684,
"acc_norm": 0.5957446808510638,
"acc_norm_stderr": 0.03208115750788684
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.02554284681740051,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.02554284681740051
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.028972648884844267,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.028972648884844267
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8605504587155963,
"acc_stderr": 0.014852421490033053,
"acc_norm": 0.8605504587155963,
"acc_norm_stderr": 0.014852421490033053
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240634,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240634
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233494,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233494
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.03498149385462472,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.03498149385462472
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.039578354719809805,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.039578354719809805
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8288633461047255,
"acc_stderr": 0.013468201614066302,
"acc_norm": 0.8288633461047255,
"acc_norm_stderr": 0.013468201614066302
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39106145251396646,
"acc_stderr": 0.01632076376380838,
"acc_norm": 0.39106145251396646,
"acc_norm_stderr": 0.01632076376380838
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292452,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292452
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7561728395061729,
"acc_stderr": 0.023891879541959607,
"acc_norm": 0.7561728395061729,
"acc_norm_stderr": 0.023891879541959607
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46936114732724904,
"acc_stderr": 0.012746237711716634,
"acc_norm": 0.46936114732724904,
"acc_norm_stderr": 0.012746237711716634
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.02806499816704009,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.02806499816704009
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.01879808628488689,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.01879808628488689
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4283965728274174,
"mc1_stderr": 0.017323088597314754,
"mc2": 0.5948960816711865,
"mc2_stderr": 0.015146045918500203
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.011099796645920522
},
"harness|gsm8k|5": {
"acc": 0.7187263078089462,
"acc_stderr": 0.012384789310940244
}
}
```
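
As a quick sanity check on a results dump like the one above, the `hendrycksTest` entries (the 57 MMLU subtasks) can be reduced to a single figure by macro-averaging their `acc` values, with every subtask weighted equally. A minimal sketch, assuming the JSON above has been saved to a local file (the path below is hypothetical):

```python
import json

# Hypothetical path; point this at wherever you saved the results JSON above.
with open("results.json") as f:
    scores = json.load(f)

# Keep only the MMLU (hendrycksTest) subtasks.
mmlu = {task: v for task, v in scores.items()
        if task.startswith("harness|hendrycksTest-")}

# Macro-average: every subtask contributes equally.
mmlu_acc = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, macro-averaged acc = {mmlu_acc:.4f}")
```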
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Weyaxi__Seraph-7B | [
"region:us"
] | 2023-12-11T09:47:29+00:00 | {"pretty_name": "Evaluation run of Weyaxi/Seraph-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Seraph-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T09:44:37.311244](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Seraph-7B/blob/main/results_2023-12-11T09-44-37.311244.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6548171567091998,\n \"acc_stderr\": 0.031923676546826464,\n \"acc_norm\": 0.6547760288690921,\n \"acc_norm_stderr\": 0.03258255753948947,\n \"mc1\": 0.4283965728274174,\n \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.5948960816711865,\n \"mc2_stderr\": 0.015146045918500203\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6544368600682594,\n \"acc_stderr\": 0.013896938461145683,\n \"acc_norm\": 0.6783276450511946,\n \"acc_norm_stderr\": 0.013650488084494166\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6727743477394941,\n \"acc_stderr\": 0.004682414968323629,\n \"acc_norm\": 0.8621788488348935,\n \"acc_norm_stderr\": 0.003440076775300575\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7302631578947368,\n \"acc_stderr\": 0.03611780560284898,\n \"acc_norm\": 0.7302631578947368,\n \"acc_norm_stderr\": 0.03611780560284898\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.02794321998933714,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.02794321998933714\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 
0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.03208115750788684,\n \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.03208115750788684\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.02554284681740051,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.02554284681740051\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6717948717948717,\n \"acc_stderr\": 
0.023807633198657266,\n \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34444444444444444,\n \"acc_stderr\": 0.028972648884844267,\n \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.028972648884844267\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188704,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188704\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8605504587155963,\n \"acc_stderr\": 0.014852421490033053,\n \"acc_norm\": 0.8605504587155963,\n \"acc_norm_stderr\": 0.014852421490033053\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240634,\n \"acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240634\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462472,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462472\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.039578354719809805\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n \"acc_stderr\": 0.013468201614066302,\n \"acc_norm\": 0.8288633461047255,\n 
\"acc_norm_stderr\": 0.013468201614066302\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39106145251396646,\n \"acc_stderr\": 0.01632076376380838,\n \"acc_norm\": 0.39106145251396646,\n \"acc_norm_stderr\": 0.01632076376380838\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292452,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292452\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7561728395061729,\n \"acc_stderr\": 0.023891879541959607,\n \"acc_norm\": 0.7561728395061729,\n \"acc_norm_stderr\": 0.023891879541959607\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46936114732724904,\n \"acc_stderr\": 0.012746237711716634,\n \"acc_norm\": 0.46936114732724904,\n \"acc_norm_stderr\": 0.012746237711716634\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.01879808628488689,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.01879808628488689\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4283965728274174,\n \"mc1_stderr\": 0.017323088597314754,\n \"mc2\": 0.5948960816711865,\n \"mc2_stderr\": 0.015146045918500203\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.011099796645920522\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7187263078089462,\n \"acc_stderr\": 0.012384789310940244\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/Seraph-7B", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|arc:challenge|25_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|gsm8k|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hellaswag|10_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T09-44-37.311244.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T09-44-37.311244.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T09-44-37.311244.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T09-44-37.311244.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T09-44-37.311244.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T09-44-37.311244.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["**/details_harness|winogrande|5_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T09-44-37.311244.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T09_44_37.311244", "path": ["results_2023-12-11T09-44-37.311244.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T09-44-37.311244.parquet"]}]}]} | 2023-12-11T09:48:10+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Weyaxi/Seraph-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Weyaxi/Seraph-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
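For example, a minimal sketch (the repository id below follows the leaderboard's `details_<org>__<model>` naming convention used elsewhere in this document; it is an assumption, not stated in this card):
```python
# Minimal sketch — the repo id assumes the standard Open LLM Leaderboard
# naming convention (details_<org>__<model>) and is not stated in this card.
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Weyaxi__Seraph-7B",
	"harness_winogrande_5",  # one of the 63 task configurations
	split="train")           # "train" always points to the latest results
```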
## Latest results
These are the latest results from run 2023-12-11T09:44:37.311244 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Weyaxi/Seraph-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Seraph-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T09:44:37.311244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Weyaxi/Seraph-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Seraph-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-11T09:44:37.311244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
6,
18,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/Seraph-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Weyaxi/Seraph-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-11T09:44:37.311244(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
63805c054e51fecd8c6c1515e5f5dab1cd33248c | # Dataset Card for "autotrain-data-TM12Topics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gbwsolutions/autotrain-data-TM12Topics | [
"region:us"
] | 2023-12-11T10:07:05+00:00 | {"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}, {"name": "autotrain_label", "dtype": {"class_label": {"names": {"0": "Accuracy", "1": "Cleanliness", "2": "Delivery", "3": "Menu", "4": "Other", "5": "Price", "6": "Quality", "7": "Service", "8": "Speed", "9": "Staff", "10": "Taste", "11": "Technology"}}}}], "splits": [{"name": "train", "num_bytes": 43060, "num_examples": 480}, {"name": "validation", "num_bytes": 9818, "num_examples": 120}], "download_size": 36597, "dataset_size": 52878}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-11T10:07:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "autotrain-data-TM12Topics"
More Information needed | [
"# Dataset Card for \"autotrain-data-TM12Topics\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-TM12Topics\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-TM12Topics\"\n\nMore Information needed"
] |
028fcba74e39028ca15f90d7d00ac3b501a33f94 | # Flan-GPT4 Dataset
## Overview
The Flan-GPT4 dataset is a collection of prompts and responses designed for training and evaluating language generation models. It contains five string-typed features: response, instruction, system, toxin_prompt, and llama_prompt.
Edited and customized from `SlimOrca-Flan`
## Dataset Information
- **Features:**
- response (string)
- instruction (string)
- system (string)
- toxin_prompt (string)
- llama_prompt (string)
- **Splits:**
- Train:
- Number of examples: 724,248
- Size: 4,093,492,977 bytes
## Intended Use
This dataset is intended for training and evaluating language generation models, particularly those focused on natural language processing and text generation tasks.
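For illustration, a minimal loading sketch (using the `erfanzar/Flan-GPT4` dataset id from this card's repository metadata; the columns match the features listed above):
```python
# Illustrative sketch: load the train split and inspect the string features.
from datasets import load_dataset

ds = load_dataset("erfanzar/Flan-GPT4", split="train")
print(ds.column_names)  # ['response', 'instruction', 'system', 'toxin_prompt', 'llama_prompt']
print(ds[0]["instruction"])  # preview one record
```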
| erfanzar/Flan-GPT4 | [
"region:us"
] | 2023-12-11T10:08:42+00:00 | {"dataset_info": {"features": [{"name": "response", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "toxin_prompt", "dtype": "string"}, {"name": "llama_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4093492977, "num_examples": 724248}], "download_size": 2266772484, "dataset_size": 4093492977}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T12:08:51+00:00 | [] | [] | TAGS
#region-us
| # Flan-GPT4 Dataset
## Overview
The Flan-GPT4 dataset is a collection of prompts and responses designed for training and evaluating language generation models. It contains various features such as response, instruction, system, toxin_prompt, and llama_prompt, each with a data type of string.
Edited and customized from 'SlimOrca-Flan'
## Dataset Information
- Features:
- response (string)
- instruction (string)
- system (string)
- toxin_prompt (string)
- llama_prompt (string)
- Splits:
- Train:
- Number of examples: 724,248
- Size: 4,093,492,977 bytes
## Intended Use
This dataset is intended for training and evaluating language generation models, particularly those focused on natural language processing and text generation tasks.
| [
"# Flan-GPT4 Dataset",
"## Overview\nThe Flan-GPT4 dataset is a collection of prompts and responses designed for training and evaluating language generation models. It contains various features such as response, instruction, system, toxin_prompt, and llama_prompt, each with a data type of string.\n\n\nEdited and customized from 'SlimOrca-Flan'",
"## Dataset Information\n- Features:\n - response (string)\n - instruction (string)\n - system (string)\n - toxin_prompt (string)\n - llama_prompt (string)\n\n- Splits:\n - Train:\n - Number of examples: 724,248\n - Size: 4,093,492,977 bytes",
"## Intended Use\nThis dataset is intended for training and evaluating language generation models, particularly those focused on natural language processing and text generation tasks."
] | [
"TAGS\n#region-us \n",
"# Flan-GPT4 Dataset",
"## Overview\nThe Flan-GPT4 dataset is a collection of prompts and responses designed for training and evaluating language generation models. It contains various features such as response, instruction, system, toxin_prompt, and llama_prompt, each with a data type of string.\n\n\nEdited and customized from 'SlimOrca-Flan'",
"## Dataset Information\n- Features:\n - response (string)\n - instruction (string)\n - system (string)\n - toxin_prompt (string)\n - llama_prompt (string)\n\n- Splits:\n - Train:\n - Number of examples: 724,248\n - Size: 4,093,492,977 bytes",
"## Intended Use\nThis dataset is intended for training and evaluating language generation models, particularly those focused on natural language processing and text generation tasks."
] | [
6,
9,
81,
71,
33
] | [
"passage: TAGS\n#region-us \n# Flan-GPT4 Dataset## Overview\nThe Flan-GPT4 dataset is a collection of prompts and responses designed for training and evaluating language generation models. It contains various features such as response, instruction, system, toxin_prompt, and llama_prompt, each with a data type of string.\n\n\nEdited and customized from 'SlimOrca-Flan'## Dataset Information\n- Features:\n - response (string)\n - instruction (string)\n - system (string)\n - toxin_prompt (string)\n - llama_prompt (string)\n\n- Splits:\n - Train:\n - Number of examples: 724,248\n - Size: 4,093,492,977 bytes## Intended Use\nThis dataset is intended for training and evaluating language generation models, particularly those focused on natural language processing and text generation tasks."
] |
87b7edab673f7dbda7e9400ee500d19c48e7ba9c | # Dataset Card for "augmented-vsr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ktennyson6/augmented-vsr | [
"region:us"
] | 2023-12-11T10:10:01+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "relation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 897911834.0, "num_examples": 6237}], "download_size": 886692853, "dataset_size": 897911834.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-11T14:59:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "augmented-vsr"
More Information needed | [
"# Dataset Card for \"augmented-vsr\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"augmented-vsr\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"augmented-vsr\"\n\nMore Information needed"
] |
f83f47e0278ad5c991003e8a87d004fc53417655 | # Dataset Card for "wikitext-103-raw-v1-para-permute-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/wikitext-103-raw-v1-para-permute-1 | [
"region:us"
] | 2023-12-11T10:22:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1093001894, "num_examples": 3602699}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}, {"name": "test", "num_bytes": 1305088, "num_examples": 4358}], "download_size": 629835482, "dataset_size": 1095466270}} | 2023-12-11T10:23:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikitext-103-raw-v1-para-permute-1"
More Information needed | [
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-1\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1-para-permute-1\"\n\nMore Information needed"
] |
f4a020987649508953bb1a771f5530192a2107f7 | # Dataset Card for "wikitext-103-raw-v1-para-permute-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/wikitext-103-raw-v1-para-permute-3 | [
"region:us"
] | 2023-12-11T10:23:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2186003784, "num_examples": 7205397}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}, {"name": "test", "num_bytes": 1305088, "num_examples": 4358}], "download_size": 1258725365, "dataset_size": 2188468160}} | 2023-12-11T10:25:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikitext-103-raw-v1-para-permute-3"
More Information needed | [
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-3\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1-para-permute-3\"\n\nMore Information needed"
] |
6a3a49659ebee981b9faed8ccd82b6976552fafc | # Dataset Card for "wikitext-103-raw-v1-para-permute-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/wikitext-103-raw-v1-para-permute-5 | [
"region:us"
] | 2023-12-11T10:25:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3279005674, "num_examples": 10808095}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}, {"name": "test", "num_bytes": 1305088, "num_examples": 4358}], "download_size": 1887425635, "dataset_size": 3281470050}} | 2023-12-11T10:27:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikitext-103-raw-v1-para-permute-5"
More Information needed | [
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-5\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1-para-permute-5\"\n\nMore Information needed"
] |
d60fe52cdc4435d776720f0c00b41e15c40eaa1d | # Dataset Card for "wikitext-103-raw-v1-para-permute-9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/wikitext-103-raw-v1-para-permute-9 | [
"region:us"
] | 2023-12-11T10:27:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5465009454, "num_examples": 18013491}, {"name": "validation", "num_bytes": 1159288, "num_examples": 3760}, {"name": "test", "num_bytes": 1305088, "num_examples": 4358}], "download_size": 3144957904, "dataset_size": 5467473830}} | 2023-12-11T10:31:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikitext-103-raw-v1-para-permute-9"
More Information needed | [
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-9\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikitext-103-raw-v1-para-permute-9\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikitext-103-raw-v1-para-permute-9\"\n\nMore Information needed"
] |
5cb9c4a20539def66738c4f9ecf8e80addf10845 |
# Dataset Card for Evaluation run of v1olet/v1olet_marcoroni-go-bruins-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [v1olet/v1olet_marcoroni-go-bruins-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
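# "harness_winogrande_5" is one of the 63 task configurations; the "train"
# split always points to the results of the latest run.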
data = load_dataset("open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B",
"harness_winogrande_5",
split="train")
```
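The aggregated metrics live in the additional "results" configuration; a sketch along the same lines (per this repo's metadata, the splits are the run timestamp and "latest", with "latest" pointing to the most recent run):
```python
# Sketch: load the aggregated "results" configuration; per the repo metadata,
# the "latest" split points to the most recent evaluation run.
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B",
	"results",
	split="latest")
```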
## Latest results
These are the [latest results from run 2023-12-11T10:41:40.851044](https://huggingface.co/datasets/open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B/blob/main/results_2023-12-11T10-41-40.851044.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.25127863782291743,
"acc_stderr": 0.03070039497833563,
"acc_norm": 0.25216695780462783,
"acc_norm_stderr": 0.03151848805684111,
"mc1": 0.2350061199510404,
"mc1_stderr": 0.014843061507731608,
"mc2": NaN,
"mc2_stderr": NaN
},
"harness|arc:challenge|25": {
"acc": 0.23890784982935154,
"acc_stderr": 0.012461071376316614,
"acc_norm": 0.2909556313993174,
"acc_norm_stderr": 0.01327307786590759
},
"harness|hellaswag|10": {
"acc": 0.2642899820752838,
"acc_stderr": 0.004400532188550212,
"acc_norm": 0.2830113523202549,
"acc_norm_stderr": 0.0044954128683246065
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.03944624162501116,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.03944624162501116
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.29605263157894735,
"acc_stderr": 0.03715062154998905,
"acc_norm": 0.29605263157894735,
"acc_norm_stderr": 0.03715062154998905
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.17,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.17,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.025757559893106723,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.025757559893106723
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2361111111111111,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.2361111111111111,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.031568093627031744,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.031568093627031744
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.04576665403207762,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.04576665403207762
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2170212765957447,
"acc_stderr": 0.02694748312149624,
"acc_norm": 0.2170212765957447,
"acc_norm_stderr": 0.02694748312149624
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813344,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813344
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.27586206896551724,
"acc_stderr": 0.03724563619774632,
"acc_norm": 0.27586206896551724,
"acc_norm_stderr": 0.03724563619774632
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.022418042891113932,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.022418042891113932
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.03893259610604674,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.03893259610604674
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.22,
"acc_stderr": 0.0416333199893227,
"acc_norm": 0.22,
"acc_norm_stderr": 0.0416333199893227
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.23870967741935484,
"acc_stderr": 0.02425107126220884,
"acc_norm": 0.23870967741935484,
"acc_norm_stderr": 0.02425107126220884
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132976984,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132976984
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2909090909090909,
"acc_stderr": 0.03546563019624336,
"acc_norm": 0.2909090909090909,
"acc_norm_stderr": 0.03546563019624336
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2878787878787879,
"acc_stderr": 0.03225883512300993,
"acc_norm": 0.2878787878787879,
"acc_norm_stderr": 0.03225883512300993
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22279792746113988,
"acc_stderr": 0.03003114797764154,
"acc_norm": 0.22279792746113988,
"acc_norm_stderr": 0.03003114797764154
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2846153846153846,
"acc_stderr": 0.022878322799706283,
"acc_norm": 0.2846153846153846,
"acc_norm_stderr": 0.022878322799706283
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.026202766534652148,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.026202766534652148
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.28991596638655465,
"acc_stderr": 0.029472485833136077,
"acc_norm": 0.28991596638655465,
"acc_norm_stderr": 0.029472485833136077
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.036313298039696545,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.036313298039696545
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.29908256880733947,
"acc_stderr": 0.01963041728541517,
"acc_norm": 0.29908256880733947,
"acc_norm_stderr": 0.01963041728541517
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.028765111718046944,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.028765111718046944
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.03114557065948678,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.03114557065948678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.25738396624472576,
"acc_stderr": 0.028458820991460302,
"acc_norm": 0.25738396624472576,
"acc_norm_stderr": 0.028458820991460302
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.17937219730941703,
"acc_stderr": 0.0257498195691928,
"acc_norm": 0.17937219730941703,
"acc_norm_stderr": 0.0257498195691928
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.25190839694656486,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.25190839694656486,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3128834355828221,
"acc_stderr": 0.036429145782924055,
"acc_norm": 0.3128834355828221,
"acc_norm_stderr": 0.036429145782924055
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.04327040932578728,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.04327040932578728
},
"harness|hendrycksTest-management|5": {
"acc": 0.2621359223300971,
"acc_stderr": 0.04354631077260597,
"acc_norm": 0.2621359223300971,
"acc_norm_stderr": 0.04354631077260597
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.02934311479809447,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.02934311479809447
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2554278416347382,
"acc_stderr": 0.015594955384455765,
"acc_norm": 0.2554278416347382,
"acc_norm_stderr": 0.015594955384455765
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.26878612716763006,
"acc_stderr": 0.023868003262500104,
"acc_norm": 0.26878612716763006,
"acc_norm_stderr": 0.023868003262500104
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3039106145251397,
"acc_stderr": 0.015382845587584518,
"acc_norm": 0.3039106145251397,
"acc_norm_stderr": 0.015382845587584518
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.17973856209150327,
"acc_stderr": 0.021986032182064148,
"acc_norm": 0.17973856209150327,
"acc_norm_stderr": 0.021986032182064148
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2572347266881029,
"acc_stderr": 0.024826171289250888,
"acc_norm": 0.2572347266881029,
"acc_norm_stderr": 0.024826171289250888
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.24691358024691357,
"acc_stderr": 0.023993501709042103,
"acc_norm": 0.24691358024691357,
"acc_norm_stderr": 0.023993501709042103
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2127659574468085,
"acc_stderr": 0.024414612974307706,
"acc_norm": 0.2127659574468085,
"acc_norm_stderr": 0.024414612974307706
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.25358539765319427,
"acc_stderr": 0.011111715336101145,
"acc_norm": 0.25358539765319427,
"acc_norm_stderr": 0.011111715336101145
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.025767252010855963,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.025767252010855963
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2434640522875817,
"acc_stderr": 0.017362473762146637,
"acc_norm": 0.2434640522875817,
"acc_norm_stderr": 0.017362473762146637
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.23636363636363636,
"acc_stderr": 0.040693063197213754,
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.040693063197213754
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.22448979591836735,
"acc_stderr": 0.026711430555538398,
"acc_norm": 0.22448979591836735,
"acc_norm_stderr": 0.026711430555538398
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.16,
"acc_stderr": 0.0368452949177471,
"acc_norm": 0.16,
"acc_norm_stderr": 0.0368452949177471
},
"harness|hendrycksTest-virology|5": {
"acc": 0.19879518072289157,
"acc_stderr": 0.031069390260789427,
"acc_norm": 0.19879518072289157,
"acc_norm_stderr": 0.031069390260789427
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.25146198830409355,
"acc_stderr": 0.033275044238468436,
"acc_norm": 0.25146198830409355,
"acc_norm_stderr": 0.033275044238468436
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2350061199510404,
"mc1_stderr": 0.014843061507731608,
"mc2": NaN,
"mc2_stderr": NaN
},
"harness|winogrande|5": {
"acc": 0.5209155485398579,
"acc_stderr": 0.014040185494212938
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B | [
"region:us"
] | 2023-12-11T10:44:34+00:00 | {"pretty_name": "Evaluation run of v1olet/v1olet_marcoroni-go-bruins-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [v1olet/v1olet_marcoroni-go-bruins-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-11T10:41:40.851044](https://huggingface.co/datasets/open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B/blob/main/results_2023-12-11T10-41-40.851044.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25127863782291743,\n \"acc_stderr\": 0.03070039497833563,\n \"acc_norm\": 0.25216695780462783,\n \"acc_norm_stderr\": 0.03151848805684111,\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.014843061507731608,\n \"mc2\": NaN,\n \"mc2_stderr\": NaN\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.23890784982935154,\n \"acc_stderr\": 0.012461071376316614,\n \"acc_norm\": 0.2909556313993174,\n \"acc_norm_stderr\": 0.01327307786590759\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2642899820752838,\n \"acc_stderr\": 0.004400532188550212,\n \"acc_norm\": 0.2830113523202549,\n \"acc_norm_stderr\": 0.0044954128683246065\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.03944624162501116,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.03944624162501116\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.29605263157894735,\n \"acc_stderr\": 0.03715062154998905,\n \"acc_norm\": 0.29605263157894735,\n \"acc_norm_stderr\": 0.03715062154998905\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106723,\n \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106723\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2361111111111111,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.2361111111111111,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.19,\n 
\"acc_stderr\": 0.03942772444036623,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.04576665403207762,\n \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.04576665403207762\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2170212765957447,\n \"acc_stderr\": 0.02694748312149624,\n \"acc_norm\": 0.2170212765957447,\n \"acc_norm_stderr\": 0.02694748312149624\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813344,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813344\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.03724563619774632,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.03724563619774632\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.25396825396825395,\n \"acc_stderr\": 0.022418042891113932,\n \"acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.022418042891113932\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.25396825396825395,\n \"acc_stderr\": 0.03893259610604674,\n \"acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.03893259610604674\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.23870967741935484,\n \"acc_stderr\": 0.02425107126220884,\n \"acc_norm\": 0.23870967741935484,\n \"acc_norm_stderr\": 0.02425107126220884\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2909090909090909,\n \"acc_stderr\": 0.03546563019624336,\n \"acc_norm\": 0.2909090909090909,\n \"acc_norm_stderr\": 0.03546563019624336\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.2878787878787879,\n \"acc_stderr\": 0.03225883512300993,\n \"acc_norm\": 0.2878787878787879,\n \"acc_norm_stderr\": 0.03225883512300993\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.22279792746113988,\n \"acc_stderr\": 0.03003114797764154,\n \"acc_norm\": 0.22279792746113988,\n \"acc_norm_stderr\": 0.03003114797764154\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2846153846153846,\n \"acc_stderr\": 0.022878322799706283,\n \"acc_norm\": 0.2846153846153846,\n \"acc_norm_stderr\": 0.022878322799706283\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.026202766534652148,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.026202766534652148\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.28991596638655465,\n \"acc_stderr\": 0.029472485833136077,\n \"acc_norm\": 0.28991596638655465,\n \"acc_norm_stderr\": 0.029472485833136077\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.271523178807947,\n \"acc_stderr\": 0.036313298039696545,\n \"acc_norm\": 0.271523178807947,\n \"acc_norm_stderr\": 0.036313298039696545\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.29908256880733947,\n \"acc_stderr\": 0.01963041728541517,\n \"acc_norm\": 0.29908256880733947,\n \"acc_norm_stderr\": 0.01963041728541517\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.028765111718046944,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.028765111718046944\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2696078431372549,\n \"acc_stderr\": 0.03114557065948678,\n \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.03114557065948678\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.25738396624472576,\n \"acc_stderr\": 0.028458820991460302,\n \"acc_norm\": 0.25738396624472576,\n \"acc_norm_stderr\": 0.028458820991460302\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.17937219730941703,\n \"acc_stderr\": 0.0257498195691928,\n \"acc_norm\": 0.17937219730941703,\n \"acc_norm_stderr\": 0.0257498195691928\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.25190839694656486,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.25190839694656486,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3128834355828221,\n \"acc_stderr\": 0.036429145782924055,\n \"acc_norm\": 0.3128834355828221,\n \"acc_norm_stderr\": 0.036429145782924055\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n \"acc_stderr\": 0.04327040932578728,\n \"acc_norm\": 0.29464285714285715,\n \"acc_norm_stderr\": 0.04327040932578728\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2621359223300971,\n \"acc_stderr\": 0.04354631077260597,\n \"acc_norm\": 0.2621359223300971,\n \"acc_norm_stderr\": 0.04354631077260597\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.02934311479809447,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.02934311479809447\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2554278416347382,\n 
\"acc_stderr\": 0.015594955384455765,\n \"acc_norm\": 0.2554278416347382,\n \"acc_norm_stderr\": 0.015594955384455765\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.26878612716763006,\n \"acc_stderr\": 0.023868003262500104,\n \"acc_norm\": 0.26878612716763006,\n \"acc_norm_stderr\": 0.023868003262500104\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3039106145251397,\n \"acc_stderr\": 0.015382845587584518,\n \"acc_norm\": 0.3039106145251397,\n \"acc_norm_stderr\": 0.015382845587584518\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.17973856209150327,\n \"acc_stderr\": 0.021986032182064148,\n \"acc_norm\": 0.17973856209150327,\n \"acc_norm_stderr\": 0.021986032182064148\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2572347266881029,\n \"acc_stderr\": 0.024826171289250888,\n \"acc_norm\": 0.2572347266881029,\n \"acc_norm_stderr\": 0.024826171289250888\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.24691358024691357,\n \"acc_stderr\": 0.023993501709042103,\n \"acc_norm\": 0.24691358024691357,\n \"acc_norm_stderr\": 0.023993501709042103\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2127659574468085,\n \"acc_stderr\": 0.024414612974307706,\n \"acc_norm\": 0.2127659574468085,\n \"acc_norm_stderr\": 0.024414612974307706\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.25358539765319427,\n \"acc_stderr\": 0.011111715336101145,\n \"acc_norm\": 0.25358539765319427,\n \"acc_norm_stderr\": 0.011111715336101145\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.025767252010855963,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.025767252010855963\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2434640522875817,\n \"acc_stderr\": 0.017362473762146637,\n \"acc_norm\": 0.2434640522875817,\n \"acc_norm_stderr\": 0.017362473762146637\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.23636363636363636,\n \"acc_stderr\": 0.040693063197213754,\n \"acc_norm\": 0.23636363636363636,\n \"acc_norm_stderr\": 0.040693063197213754\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.22448979591836735,\n \"acc_stderr\": 0.026711430555538398,\n \"acc_norm\": 0.22448979591836735,\n \"acc_norm_stderr\": 0.026711430555538398\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.16,\n \"acc_stderr\": 0.0368452949177471,\n \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.0368452949177471\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.19879518072289157,\n \"acc_stderr\": 0.031069390260789427,\n \"acc_norm\": 0.19879518072289157,\n \"acc_norm_stderr\": 0.031069390260789427\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.25146198830409355,\n \"acc_stderr\": 0.033275044238468436,\n \"acc_norm\": 0.25146198830409355,\n \"acc_norm_stderr\": 0.033275044238468436\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.014843061507731608,\n \"mc2\": NaN,\n \"mc2_stderr\": NaN\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5209155485398579,\n \"acc_stderr\": 0.014040185494212938\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": 
"https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|arc:challenge|25_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|gsm8k|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hellaswag|10_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T10-41-40.851044.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T10-41-40.851044.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-11T10-41-40.851044.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-11T10-41-40.851044.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T10-41-40.851044.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-11T10-41-40.851044.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["**/details_harness|winogrande|5_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-11T10-41-40.851044.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_11T10_41_40.851044", "path": ["results_2023-12-11T10-41-40.851044.parquet"]}, {"split": "latest", "path": 
["results_2023-12-11T10-41-40.851044.parquet"]}]}]} | 2023-12-11T10:45:17+00:00 | [] | [] | TAGS
# Dataset Card for Evaluation run of v1olet/v1olet_marcoroni-go-bruins-7B
## Dataset Description
- Homepage:
- Repository: https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-7B
- Paper:
- Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- Point of Contact: [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model v1olet/v1olet_marcoroni-go-bruins-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
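As a quick illustration of that layout, the configurations can be enumerated programmatically. A minimal sketch, assuming the dataset lives under the leaderboard's usual `details_<org>__<model>` repository name (an assumption, not stated in this card):

```python
from datasets import get_dataset_config_names

# Assumed repository name, following the Open LLM Leaderboard's
# "details_<org>__<model>" naming convention.
REPO = "open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B"

configs = get_dataset_config_names(REPO)
print(len(configs))  # the card reports 63 task configurations, plus "results"
print(configs[:5])   # e.g. the per-task "harness_..." configurations
```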
To load the details from a run, you can for instance do the following:
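For example, a minimal sketch using the `datasets` library (the repository name and the chosen configuration are assumptions, as above):

```python
from datasets import load_dataset

# Repository name assumed as above; any config_name from the metadata works.
data = load_dataset(
    "open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B",
    "harness_winogrande_5",   # one of the evaluated tasks
    split="latest",           # or a timestamped split, e.g. "2023_12_11T10_41_40.851044"
)
print(data)
```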
## Latest results
These are the latest results from run 2023-12-11T10:41:40.851044 (note that there might be results for other tasks in the repository if successive evaluations didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each evaluation):
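Since the aggregated scores live in the "results" configuration, they can be retrieved with a sketch like the following (repository name assumed, as above):

```python
from datasets import load_dataset

# Aggregated metrics for the run; repository name is an assumption.
results = load_dataset(
    "open-llm-leaderboard/details_v1olet__v1olet_marcoroni-go-bruins-7B",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores
```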
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions