| Column | Type | Length (min) | Length (max) |
| ---- | ---- | ---- | ---- |
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
7c281a1a4b5a104089b43f4b07528704249e1408
# Dataset Card for "russia-ukraine-cnbc"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
orgcatorg/russia-ukraine-cnbc
[ "region:us" ]
2023-10-15T12:27:37+00:00
{"dataset_info": {"features": [{"name": "@type", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dateModified", "dtype": "string"}, {"name": "datePublished", "dtype": "string"}, {"name": "mainEntityOfPage", "dtype": "string"}, {"name": "articleBody", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "thumbnailUrl", "dtype": "string"}, {"name": "video", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6035507, "num_examples": 2757}], "download_size": 0, "dataset_size": 6035507}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T19:11:24+00:00
[]
[]
d1b0a42bcdd71ea0e3083e751e5bf03110b91d2e
# Dataset Card for "gpt4v-emotion-dataset"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TwoAbove/gpt4v-emotion-dataset
[ "region:us" ]
2023-10-15T12:31:48+00:00
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "link", "dtype": "string"}, {"name": "message_id", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51420737.0, "num_examples": 48}], "download_size": 51389687, "dataset_size": 51420737.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-15T22:57:40+00:00
[]
[]
c8a43fed10986b38d9b29b074ff42c13a5a75b0c
# Project Gutenberg Open Audiobook Collection

Source: <https://marhamilresearch4.blob.core.windows.net/gutenberg-public/Website/browse.html>

You will need to install `librosa` and `soundfile` to load this dataset.
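Per the feature list in the metadata below, each row exposes its audio under the `mp3` feature, and a decoded clip arrives as a dict with an `array` and a `sampling_rate`. A minimal, hedged sketch (the `clip_duration_seconds` helper is ours, not part of the dataset):

```python
# Loading sketch (requires: pip install datasets librosa soundfile):
#   from datasets import load_dataset
#   ds = load_dataset("TwoAbove/the-project-gutenberg-open-audiobook-collection",
#                     split="train", streaming=True)
#   clip = next(iter(ds))["mp3"]   # assumed shape: {"array": [...], "sampling_rate": ...}

def clip_duration_seconds(clip: dict) -> float:
    """Duration of a decoded audio clip, assuming the dict layout above."""
    return len(clip["array"]) / clip["sampling_rate"]

# Synthetic stand-in for a decoded clip: 2 seconds of silence at 16 kHz.
silence = {"array": [0.0] * 32000, "sampling_rate": 16000}
print(clip_duration_seconds(silence))  # 2.0
```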
TwoAbove/the-project-gutenberg-open-audiobook-collection
[ "language:en", "synthetic-dataset", "audio-dataset", "region:us" ]
2023-10-15T13:25:30+00:00
{"language": ["en"], "pretty_name": "Project Gutenberg Open Audiobook Collection", "tags": ["synthetic-dataset", "audio-dataset"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "mp3", "dtype": "audio"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*"}]}]}
2023-11-03T10:48:48+00:00
[]
[ "en" ]
666346d63325c8901997b5a81ff2d7b72a271da3
This dataset contains question/answer pairs about a French legal protection insurance (https://www.service-public.fr/particuliers/vosdroits/F3049?lang=en). The objective of this dataset is to contribute to open source research projects aiming, for instance, to:

* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.

Other datasets of the same kind are also available - or will be available soon - as part of this research effort. See here: https://huggingface.co/collections/zelros/legal-protection-insurance-6536e8f389dd48faca78447e

Here is an example of usage of this dataset: https://huggingface.co/spaces/zelros/The-legal-protection-insurance-comparator
zelros/pj-ce
[ "insurance", "region:us" ]
2023-10-15T13:32:52+00:00
{"tags": ["insurance"]}
2023-11-05T22:54:04+00:00
[]
[]
6b3acc0afcaea2b20da4183c8922e37d51c833bd
# The Wikipedia Webpage 2M (WikiWeb2M) Dataset

We present the WikiWeb2M dataset consisting of over 2 million English Wikipedia articles. Our released dataset includes all of the text content on each page, links to the images present, and structure metadata such as which section each text and image element comes from. This dataset is a contribution from our [paper](https://arxiv.org/abs/2305.03668) `A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding`. The dataset is stored as gzipped TFRecord files which can be downloaded here or on our [GitHub repository](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md).

## WikiWeb2M Statistics

WikiWeb2M is the first multimodal open source dataset to include all page content in a unified format. Here we provide aggregate information about the WikiWeb2M dataset as well as the number of samples available with each of the fine-tuning tasks we design from it.

| Number of | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Pages | 1,803,225 | 100,475 | 100,833 |
| Sections | 10,519,294 | 585,651 | 588,552 |
| Unique Images | 3,867,277 | 284,975 | 286,390 |
| Total Images | 5,340,708 | 299,057 | 300,666 |

Our data processing and filtering choices for each fine-tuning task are described in the paper.

| Downstream Task Samples | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Page Description Generation | 1,435,263 | 80,103 | 80,339 |
| Section Summarization | 3,082,031 | 172,984 | 173,591 |
| Contextual Image Captioning | 2,222,814 | 124,703 | 124,188 |

## Data and Task Examples

Here we illustrate how a single webpage can be processed into the three tasks we study: page description generation, section summarization, and contextual image captioning. The paper includes multiple Wikipedia article examples.

![Illustration of Succulents Wikipedia Article being used for page description generation, section summarization, and contextual image captioning](images/wikiweb2m_image.png)

## Usage

### TFRecord Features

Here we provide the names of the fields included in the dataset, their TensorFlow Sequence Example type, their data type, and a brief description.

| Feature | Sequence Example Type | DType | Description |
| ---- | ---- | ---- | ---- |
| `split` | Context | string | Dataset split this page contributes to (e.g., train, val, or test) |
| `page_url` | Context | string | Wikipedia page URL |
| `page_title` | Context | string | Wikipedia page title, title of the article |
| `raw_page_description` | Context | string | Wikipedia page description, which is typically the same or very similar to the content of the first (root) section of the article |
| `clean_page_description` | Context | string | `raw_page_description` but with newline and tab characters removed; this provides the exact target text for our page description generation task |
| `page_contains_images` | Context | int64 | Whether the Wikipedia page has images after our cleaning and processing steps |
| `page_content_sections_without_table_list` | Context | int64 | Number of content sections with text or images that do not contain a list or table; this field can be used to reproduce data filtering for page description generation |
| `is_page_description_sample` | Context | int64 | Whether a page is used as a sample for the page description fine-tuning task |
| `section_title` | Sequence | string | Titles of each section on the Wikipedia page, in order |
| `section_index` | Sequence | int64 | Index of each section on the Wikipedia page, in order |
| `section_depth` | Sequence | int64 | Depth of each section on the Wikipedia page, in order |
| `section_heading_level` | Sequence | int64 | Heading level of each section on the Wikipedia page, in order |
| `section_subsection_index` | Sequence | int64 | Subsection indices, grouped by section in order |
| `section_parent_index` | Sequence | int64 | The parent section index of each section, in order |
| `section_text` | Sequence | string | The body text of each section, in order |
| `is_section_summarization_sample` | Sequence | int64 | Whether a section is used as a sample for the section summarization fine-tuning task |
| `section_raw_1st_sentence` | Sequence | string | The processed out first sentence of each section, in order |
| `section_clean_1st_sentence` | Sequence | string | The same as `section_raw_1st_sentence` but with newline and tab characters removed; this provides the exact target text for our section summarization task |
| `section_rest_sentence` | Sequence | string | The processed out sentences following the first sentence of each section, in order |
| `section_contains_table_or_list` | Sequence | int64 | Whether section content contains a table or list; this field is needed to be able to reproduce sample filtering for section summarization |
| `section_contains_images` | Sequence | int64 | Whether each section has images after our cleaning and processing steps, in order |
| `is_image_caption_sample` | Sequence | int64 | Whether an image is used as a sample for the image captioning fine-tuning task |
| `section_image_url` | Sequence | string | Image URLs, grouped by section in order |
| `section_image_mime_type` | Sequence | string | Image MIME type, grouped by section in order |
| `section_image_width` | Sequence | int64 | Image width, grouped by section in order |
| `section_image_height` | Sequence | int64 | Image height, grouped by section in order |
| `section_image_in_wit` | Sequence | int64 | Whether an image was originally contained in the WIT dataset, grouped by section in order |
| `section_image_raw_attr_desc` | Sequence | string | Image attribution description, grouped by section in order |
| `section_image_clean_attr_desc` | Sequence | string | The English-only processed portions of the attribution description |
| `section_image_raw_ref_desc` | Sequence | string | Image reference description, grouped by section in order |
| `section_image_clean_ref_desc` | Sequence | string | The same as `section_image_raw_ref_desc` but with newline and tab characters removed; this provides the exact target text for our image captioning task |
| `section_image_alt_text` | Sequence | string | Image alt-text, grouped by section in order |
| `section_image_captions` | Sequence | string | Comma separated concatenated text from alt-text, attribution, and reference descriptions; this is how captions are formatted as input text when used |

### Loading the Data

Here we provide a small code snippet for how to load the TFRecord files. First, load any necessary packages.

```python
import numpy as np
import glob
import tensorflow.compat.v1 as tf
from collections import defaultdict
```

Next, define a data parser class.

```python
class DataParser():
    def __init__(self, path: str = '', filepath: str = 'wikiweb2m-*'):
        # path: directory containing the TFRecord shards; filepath: shard name pattern.
        self.filepath = filepath
        self.path = path
        self.data = defaultdict(list)

    def parse_data(self):
        context_feature_description = {
            'split': tf.io.FixedLenFeature([], dtype=tf.string),
            'page_title': tf.io.FixedLenFeature([], dtype=tf.string),
            'page_url': tf.io.FixedLenFeature([], dtype=tf.string),
            'clean_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
            'raw_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
            'is_page_description_sample': tf.io.FixedLenFeature([], dtype=tf.int64),
            'page_contains_images': tf.io.FixedLenFeature([], dtype=tf.int64),
            'page_content_sections_without_table_list': tf.io.FixedLenFeature([], dtype=tf.int64)
        }

        sequence_feature_description = {
            'is_section_summarization_sample': tf.io.VarLenFeature(dtype=tf.int64),
            'section_title': tf.io.VarLenFeature(dtype=tf.string),
            'section_index': tf.io.VarLenFeature(dtype=tf.int64),
            'section_depth': tf.io.VarLenFeature(dtype=tf.int64),
            'section_heading_level': tf.io.VarLenFeature(dtype=tf.int64),
            'section_subsection_index': tf.io.VarLenFeature(dtype=tf.int64),
            'section_parent_index': tf.io.VarLenFeature(dtype=tf.int64),
            'section_text': tf.io.VarLenFeature(dtype=tf.string),
            'section_clean_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
            'section_raw_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
            'section_rest_sentence': tf.io.VarLenFeature(dtype=tf.string),
            'is_image_caption_sample': tf.io.VarLenFeature(dtype=tf.int64),
            'section_image_url': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_mime_type': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_width': tf.io.VarLenFeature(dtype=tf.int64),
            'section_image_height': tf.io.VarLenFeature(dtype=tf.int64),
            'section_image_in_wit': tf.io.VarLenFeature(dtype=tf.int64),
            'section_contains_table_or_list': tf.io.VarLenFeature(dtype=tf.int64),
            'section_image_captions': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_alt_text': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_raw_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_clean_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_raw_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
            'section_image_clean_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
            'section_contains_images': tf.io.VarLenFeature(dtype=tf.int64)
        }

        def _parse_function(example_proto):
            return tf.io.parse_single_sequence_example(
                example_proto, context_feature_description, sequence_feature_description)

        suffix = '.tfrecord*'
        data_path = glob.glob(self.path + self.filepath + suffix)
        raw_dataset = tf.data.TFRecordDataset(data_path, compression_type='GZIP')
        parsed_dataset = raw_dataset.map(_parse_function)
        for d in parsed_dataset:
            # Each parsed example is a (context, sequence) tuple; bucket it by split.
            split = d[0]['split'].numpy().decode()
            self.data[split].append(d)
```

Then you can run the following to parse the dataset.

```python
parser = DataParser()
parser.parse_data()
print((len(parser.data['train']), len(parser.data['val']), len(parser.data['test'])))
```

### Models

Our full attention, transient global, and prefix global experiments were run using the [LongT5](https://github.com/google-research/longt5) code base.

## How to Cite

If you extend or use this work, please cite the [paper](https://arxiv.org/abs/2305.03668) where it was introduced:

```
@inproceedings{burns2023wiki,
  title={A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding},
  author={Andrea Burns and Krishna Srinivasan and Joshua Ainslie and Geoff Brown and Bryan A. Plummer and Kate Saenko and Jianmo Ni and Mandy Guo},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2023},
  url={https://openreview.net/forum?id=rwcLHjtUmn}
}
```
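As a quick reference, each of the three fine-tuning tasks above takes its target text from one "clean" field of the schema. A small sketch (the dict and its name are ours, not part of the WikiWeb2M release):

```python
# Hypothetical helper: maps each fine-tuning task to the feature that holds
# its exact target text, per the TFRecord feature descriptions above.
TASK_TARGET_FIELD = {
    "page_description_generation": "clean_page_description",        # Context feature
    "section_summarization": "section_clean_1st_sentence",          # Sequence feature
    "contextual_image_captioning": "section_image_clean_ref_desc",  # Sequence feature
}

for task, field in TASK_TARGET_FIELD.items():
    print(f"{task} -> {field}")
```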
aburns4/WikiWeb2M
[ "license:cc-by-sa-3.0", "arxiv:2305.03668", "region:us" ]
2023-10-15T13:45:20+00:00
{"license": "cc-by-sa-3.0"}
2023-10-15T15:48:48+00:00
[ "2305.03668" ]
[]
96797e5f882c223b7f023ff721b22971c6cba445
# Dataset Card for "pt-squad-generate-question"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
godoyj/pt-squad-generate-question
[ "region:us" ]
2023-10-15T13:52:51+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "dtype": "string"}, {"name": "questions", "struct": [{"name": "text", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 83980330, "num_examples": 87510}, {"name": "validation", "num_bytes": 10423399, "num_examples": 10570}], "download_size": 26140275, "dataset_size": 94403729}}
2023-10-15T13:53:38+00:00
[]
[]
4b93ae7cc35b00b0388124d2249f56767cfb05e7
# Dataset Card for "pt-squad-generate-answer"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
godoyj/pt-squad-generate-answer
[ "region:us" ]
2023-10-15T13:53:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 78166150, "num_examples": 87510}, {"name": "validation", "num_bytes": 9717596, "num_examples": 10570}], "download_size": 19115754, "dataset_size": 87883746}}
2023-10-15T13:54:30+00:00
[]
[]
0d99463084d003515e9216e91d2a6f9f8d2dd6de
# Dataset Card for "russia-ukraine-cnn"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
orgcatorg/russia-ukraine-cnn
[ "region:us" ]
2023-10-15T13:55:21+00:00
{"dataset_info": {"features": [{"name": "@type", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dateModified", "dtype": "string"}, {"name": "datePublished", "dtype": "string"}, {"name": "mainEntityOfPage", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "articleBody", "dtype": "string"}, {"name": "image", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41401329, "num_examples": 19759}], "download_size": 17332574, "dataset_size": 41401329}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-15T21:22:37+00:00
[]
[]
5b90935fac6dcc1312fa8755851716a0f4c2d55e
# Dataset Card for "plant-images"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yusuf802/plant-images
[ "region:us" ]
2023-10-15T14:14:30+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Apple_Black_rot", "1": "Apple_Cedar_apple_rust", "2": "Apple_Powdery_mildew", "3": "Apple_healthy", "4": "Apple_scab", "5": "Cherry_(including_sour)_Powdery_mildew", "6": "Cherry_(including_sour)_healthy", "7": "Corn_(maize)_Cercospora_leaf_spot Gray_leaf_spot", "8": "Corn_(maize)_Common_rust", "9": "Corn_(maize)_Northern_Leaf_Blight", "10": "Corn_(maize)_healthy", "11": "Cotton_leaf_diseased", "12": "Cotton_leaf_fresh", "13": "Grape_Black_rot", "14": "Grape___Esca_(Black_Measles)", "15": "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)", "16": "Grape___healthy", "17": "Orange_Haunglongbing_(Citrus_greening)", "18": "Orange__Black_Rot", "19": "Orange__Canker", "20": "Orange__Healthy", "21": "Peach_Bacterial_spot", "22": "Peach_healthy", "23": "Pepper,_bell_Bacterial_spot", "24": "Pepper,_bell_healthy", "25": "Potato_Early_blight", "26": "Potato_Late_blight", "27": "Potato_healthy", "28": "Squash_Powdery_mildew", "29": "Strawberry_Leaf_scorch", "30": "Strawberry_healthy", "31": "Tomato_Bacterial_spot", "32": "Tomato_Early_blight", "33": "Tomato_Late_blight", "34": "Tomato_Leaf_Mold", "35": "Tomato_Septoria_leaf_spot", "36": "Tomato_Spider_mites_Two_spotted_spider_mite", "37": "Tomato_Target_Spot", "38": "Tomato_Tomato_Yellow_Leaf_Curl_Virus", "39": "Tomato_Tomato_mosaic_virus", "40": "Tomato_healthy", "41": "Wheat_healthy", "42": "Wheat_leaf_rust", "43": "Wheat_nitrogen_deficiency"}}}}], "splits": [{"name": "train", "num_bytes": 5783472143.757347, "num_examples": 56842}, {"name": "test", "num_bytes": 929558444.8586537, "num_examples": 10032}], "download_size": 6448751314, "dataset_size": 6713030588.616001}}
2023-10-15T14:19:09+00:00
[]
[]
cf7b07f7816bd7cfe2fe573de2593206beb2d80e
# Dataset Card for "synpre_set_1M_token_3000"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/synpre_set_1M_token_3000
[ "region:us" ]
2023-10-15T14:16:03+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1201540790, "num_examples": 1000000}, {"name": "validation", "num_bytes": 12049203, "num_examples": 10000}], "download_size": 733055816, "dataset_size": 1213589993}}
2023-10-15T14:18:13+00:00
[]
[]
7f23e32c893534e950f1ddeb875b6177ee73f868
# Dataset Card for "israel-hamas-gaza-cnbc"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
orgcatorg/israel-hamas-gaza-cnbc
[ "region:us" ]
2023-10-15T14:32:36+00:00
{"dataset_info": {"features": [{"name": "@type", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "dateModified", "dtype": "string"}, {"name": "datePublished", "dtype": "string"}, {"name": "mainEntityOfPage", "dtype": "string"}, {"name": "articleBody", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "thumbnailUrl", "dtype": "string"}, {"name": "video", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 668826, "num_examples": 335}], "download_size": 0, "dataset_size": 668826}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T19:12:35+00:00
[]
[]
dfcdad6c282afc0c612e8112b6c1c3a1c3861b47
# Dataset Card for "qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ostapeno/qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl
[ "region:us" ]
2023-10-15T14:38:04+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "docno", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "icl_examples", "sequence": "string"}, {"name": "author_instr", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "author_response", "dtype": "string"}, {"name": "normalized_cumul_logprob_response", "dtype": "float64"}], "splits": [{"name": "formal_logic", "num_bytes": 16064538.691369945, "num_examples": 5673}, {"name": "machine_learning", "num_bytes": 20632157.395614564, "num_examples": 7286}, {"name": "global_facts", "num_bytes": 22234929.984952725, "num_examples": 7852}, {"name": "abstract_algebra", "num_bytes": 24030261.82530678, "num_examples": 8486}, {"name": "high_school_physics", "num_bytes": 22147145.62051901, "num_examples": 7821}, {"name": "college_biology", "num_bytes": 20867192.95200161, "num_examples": 7369}, {"name": "high_school_government_and_politics", "num_bytes": 21133377.798994165, "num_examples": 7463}, {"name": "prehistory", "num_bytes": 22368022.408449005, "num_examples": 7899}, {"name": "security_studies", "num_bytes": 19454147.85998793, "num_examples": 6870}, {"name": "sociology", "num_bytes": 22217939.462804265, "num_examples": 7846}], "download_size": 42555653, "dataset_size": 211149713.99999994}}
2023-10-15T14:38:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl" More Information needed
[ "# Dataset Card for \"qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl\"\n\nMore Information needed" ]
[ 6, 36 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl\"\n\nMore Information needed" ]
d01393608ba24407d407af2006caec8de17d72e5
Filtered ostapeno/qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl to match per subject length from sordonia/qa-platy_icl0_clen128_maxD-1_maxC5000_0
ostapeno/qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl_length_matched
[ "region:us" ]
2023-10-15T14:47:44+00:00
{}
2023-10-15T14:48:56+00:00
[]
[]
TAGS #region-us
Filtered ostapeno/qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl to match per subject length from sordonia/qa-platy_icl0_clen128_maxD-1_maxC5000_0
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
2403f042b3d7b6f4a544b8cf7171b24e40f78ed9
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3632035.svg)](https://doi.org/10.5281/zenodo.3632035) # BACH Dataset : Grand Challenge on Breast Cancer Histology images **Homepage**: https://zenodo.org/records/3632035 \ **Homepage**: https://iciar2018-challenge.grand-challenge.org/ \ **Publication Date**: 2019-05-31 \ **License**: [Creative Commons Attribution Non Commercial No Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) \ **Citation**: ```bibtex @dataset{polonia_2020_3632035, author = {Polónia, António and Eloy, Catarina and Aguiar, Paulo}, title = {{BACH Dataset : Grand Challenge on Breast Cancer Histology images}}, month = jan, year = 2020, publisher = {Zenodo} } ``` ## Description The dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images. Microscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image. The annotation was performed by two medical experts and images where there was disagreement were discarded. Images have the following specifications: * Color model: R(ed)G(reen)B(lue) * Size: 2048 x 1536 pixels * Pixel scale: 0.42 µm x 0.42 µm * Memory space: 10-20 MB (approx.) * Type of label: image-wise
1aurent/BACH
[ "task_categories:image-classification", "size_categories:n<1K", "license:cc-by-nc-nd-4.0", "biology", "Histopathology", "Histology", "Digital Pathology", "Breast Cancer", "region:us" ]
2023-10-15T14:53:43+00:00
{"license": "cc-by-nc-nd-4.0", "size_categories": ["n<1K"], "task_categories": ["image-classification"], "tags": ["biology", "Histopathology", "Histology", "Digital Pathology", "Breast Cancer"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Benign", "1": "InSitu", "2": "Invasive", "3": "Normal", "4": "Unknown"}}}}], "splits": [{"name": "train", "num_bytes": 7370596186.0, "num_examples": 400}, {"name": "test", "num_bytes": 1887476013.0, "num_examples": 100}], "download_size": 7727410763, "dataset_size": 9258072199.0}}
2023-10-15T16:07:11+00:00
[]
[]
TAGS #task_categories-image-classification #size_categories-n<1K #license-cc-by-nc-nd-4.0 #biology #Histopathology #Histology #Digital Pathology #Breast Cancer #region-us
![DOI](URL # BACH Dataset : Grand Challenge on Breast Cancer Histology images Homepage: URL \ Homepage: URL \ Publication Date: 2019-05-31 \ License: Creative Commons Attribution Non Commercial No Derivatives 4.0 International \ Citation: ## Description The dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images. Microscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image. The annotation was performed by two medical experts and images where there was disagreement were discarded. Images have the following specifications: * Color model: R(ed)G(reen)B(lue) * Size: 2048 x 1536 pixels * Pixel scale: 0.42 µm x 0.42 µm * Memory space: 10-20 MB (approx.) * Type of label: image-wise
[ "# BACH Dataset : Grand Challenge on Breast Cancer Histology images\n\nHomepage: URL \\\nHomepage: URL \\\nPublication Date: 2019-05-31 \\\nLicense: Creative Commons Attribution Non Commercial No Derivatives 4.0 International \\\nCitation:", "## Description\n\nThe dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images.\n\nMicroscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image.\nThe annotation was performed by two medical experts and images where there was disagreement were discarded.\nImages have the following specifications:\n * Color model: R(ed)G(reen)B(lue)\n * Size: 2048 x 1536 pixels\n * Pixel scale: 0.42 µm x 0.42 µm\n * Memory space: 10-20 MB (approx.)\n * Type of label: image-wise" ]
[ "TAGS\n#task_categories-image-classification #size_categories-n<1K #license-cc-by-nc-nd-4.0 #biology #Histopathology #Histology #Digital Pathology #Breast Cancer #region-us \n", "# BACH Dataset : Grand Challenge on Breast Cancer Histology images\n\nHomepage: URL \\\nHomepage: URL \\\nPublication Date: 2019-05-31 \\\nLicense: Creative Commons Attribution Non Commercial No Derivatives 4.0 International \\\nCitation:", "## Description\n\nThe dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images.\n\nMicroscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image.\nThe annotation was performed by two medical experts and images where there was disagreement were discarded.\nImages have the following specifications:\n * Color model: R(ed)G(reen)B(lue)\n * Size: 2048 x 1536 pixels\n * Pixel scale: 0.42 µm x 0.42 µm\n * Memory space: 10-20 MB (approx.)\n * Type of label: image-wise" ]
[ 61, 54, 157 ]
[ "passage: TAGS\n#task_categories-image-classification #size_categories-n<1K #license-cc-by-nc-nd-4.0 #biology #Histopathology #Histology #Digital Pathology #Breast Cancer #region-us \n# BACH Dataset : Grand Challenge on Breast Cancer Histology images\n\nHomepage: URL \\\nHomepage: URL \\\nPublication Date: 2019-05-31 \\\nLicense: Creative Commons Attribution Non Commercial No Derivatives 4.0 International \\\nCitation:## Description\n\nThe dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images.\n\nMicroscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image.\nThe annotation was performed by two medical experts and images where there was disagreement were discarded.\nImages have the following specifications:\n * Color model: R(ed)G(reen)B(lue)\n * Size: 2048 x 1536 pixels\n * Pixel scale: 0.42 µm x 0.42 µm\n * Memory space: 10-20 MB (approx.)\n * Type of label: image-wise" ]
b1fe3d1173e333e39f66ee25b93f7ca54c103482
# Dataset Card for "synpre_set_1M_token_1000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/synpre_set_1M_token_1000
[ "region:us" ]
2023-10-15T15:09:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1193269554, "num_examples": 1000000}, {"name": "validation", "num_bytes": 11956441, "num_examples": 10000}], "download_size": 598117661, "dataset_size": 1205225995}}
2023-10-15T15:11:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "synpre_set_1M_token_1000" More Information needed
[ "# Dataset Card for \"synpre_set_1M_token_1000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"synpre_set_1M_token_1000\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"synpre_set_1M_token_1000\"\n\nMore Information needed" ]
beeb73cfccf0037c621f84bbb8077f2723be5458
# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/TehVenom/Dolly_Shygmalion-6b
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [TehVenom/Dolly_Shygmalion-6b](https://huggingface.co/TehVenom/Dolly_Shygmalion-6b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-15T16:26:35.787063](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b/blob/main/results_2023-10-15T16-26-35.787063.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0008389261744966443, "em_stderr": 0.0002964962989801232, "f1": 0.049329907718121055, "f1_stderr": 0.001207499751606471, "acc": 0.33737021840348064, "acc_stderr": 0.008672111270767138 }, "harness|drop|3": { "em": 0.0008389261744966443, "em_stderr": 0.0002964962989801232, "f1": 0.049329907718121055, "f1_stderr": 0.001207499751606471 }, "harness|gsm8k|5": { "acc": 0.02122820318423048, "acc_stderr": 0.003970449129848635 }, "harness|winogrande|5": { "acc": 0.6535122336227308, "acc_stderr": 0.01337377341168564 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b
[ "region:us" ]
2023-10-15T15:26:39+00:00
{"pretty_name": "Evaluation run of TehVenom/Dolly_Shygmalion-6b", "dataset_summary": "Dataset automatically created during the evaluation run of model [TehVenom/Dolly_Shygmalion-6b](https://huggingface.co/TehVenom/Dolly_Shygmalion-6b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T16:26:35.787063](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b/blob/main/results_2023-10-15T16-26-35.787063.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801232,\n \"f1\": 0.049329907718121055,\n \"f1_stderr\": 0.001207499751606471,\n \"acc\": 0.33737021840348064,\n \"acc_stderr\": 0.008672111270767138\n },\n \"harness|drop|3\": {\n \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801232,\n \"f1\": 0.049329907718121055,\n \"f1_stderr\": 0.001207499751606471\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02122820318423048,\n \"acc_stderr\": 0.003970449129848635\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6535122336227308,\n \"acc_stderr\": 0.01337377341168564\n }\n}\n```", "repo_url": "https://huggingface.co/TehVenom/Dolly_Shygmalion-6b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T16_26_35.787063", "path": ["**/details_harness|drop|3_2023-10-15T16-26-35.787063.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T16-26-35.787063.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T16_26_35.787063", "path": ["**/details_harness|gsm8k|5_2023-10-15T16-26-35.787063.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T16-26-35.787063.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T16_26_35.787063", "path": ["**/details_harness|winogrande|5_2023-10-15T16-26-35.787063.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T16-26-35.787063.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T16_26_35.787063", "path": ["results_2023-10-15T16-26-35.787063.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T16-26-35.787063.parquet"]}]}]}
2023-10-15T15:26:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b

## Dataset Description

- Homepage: 
- Repository: URL
- Paper: 
- Leaderboard: URL
- Point of Contact: clementine@URL

### Dataset Summary

Dataset automatically created during the evaluation run of model TehVenom/Dolly_Shygmalion-6b on the Open LLM Leaderboard.

The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-10-15T16:26:35.787063(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Contributions
[ "# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TehVenom/Dolly_Shygmalion-6b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T16:26:35.787063(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TehVenom/Dolly_Shygmalion-6b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T16:26:35.787063(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TehVenom/Dolly_Shygmalion-6b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T16:26:35.787063(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
443caf3ab048bb4ed5fd580cf54085b01c5ec4e3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Crepot/distilbert-base-uncased-finetuned-squad * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@emmermarcell](https://huggingface.co/emmermarcell) for evaluating this model.
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-1f754a-95278146333
[ "autotrain", "evaluation", "region:us" ]
2023-10-15T15:50:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "Crepot/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2023-10-15T15:51:38+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: Crepot/distilbert-base-uncased-finetuned-squad * Dataset: adversarial_qa * Config: adversarialQA * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @emmermarcell for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Crepot/distilbert-base-uncased-finetuned-squad\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @emmermarcell for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Crepot/distilbert-base-uncased-finetuned-squad\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @emmermarcell for evaluating this model." ]
[ 13, 99, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Crepot/distilbert-base-uncased-finetuned-squad\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @emmermarcell for evaluating this model." ]
21a4c1f7ca0e538f4bf2be0aa92e385d7cd041c0
# Dataset Card for "leaf-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yusuf802/leaf-images
[ "region:us" ]
2023-10-15T15:59:42+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Apple_Black_rot", "1": "Apple_Cedar_apple_rust", "2": "Apple_Powdery_mildew", "3": "Apple_healthy", "4": "Apple_scab", "5": "Cherry_(including_sour)_Powdery_mildew", "6": "Cherry_(including_sour)_healthy", "7": "Corn_(maize)_Cercospora_leaf_spot Gray_leaf_spot", "8": "Corn_(maize)_Common_rust", "9": "Corn_(maize)_Northern_Leaf_Blight", "10": "Corn_(maize)_healthy", "11": "Cotton_leaf_diseased", "12": "Cotton_leaf_fresh", "13": "Grape_Black_rot", "14": "Grape___Esca_(Black_Measles)", "15": "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)", "16": "Grape___healthy", "17": "Orange_Haunglongbing_(Citrus_greening)", "18": "Orange__Black_Rot", "19": "Orange__Canker", "20": "Orange__Healthy", "21": "Peach_Bacterial_spot", "22": "Peach_healthy", "23": "Pepper,_bell_Bacterial_spot", "24": "Pepper,_bell_healthy", "25": "Potato_Early_blight", "26": "Potato_Late_blight", "27": "Potato_healthy", "28": "Squash_Powdery_mildew", "29": "Strawberry_Leaf_scorch", "30": "Strawberry_healthy", "31": "Tomato_Bacterial_spot", "32": "Tomato_Early_blight", "33": "Tomato_Late_blight", "34": "Tomato_Leaf_Mold", "35": "Tomato_Septoria_leaf_spot", "36": "Tomato_Spider_mites_Two_spotted_spider_mite", "37": "Tomato_Target_Spot", "38": "Tomato_Tomato_Yellow_Leaf_Curl_Virus", "39": "Tomato_Tomato_mosaic_virus", "40": "Tomato_healthy", "41": "Wheat_healthy", "42": "Wheat_leaf_rust", "43": "Wheat_nitrogen_deficiency"}}}}], "splits": [{"name": "train", "num_bytes": 7355420032.737346, "num_examples": 56842}, {"name": "test", "num_bytes": 1331846480.2826538, "num_examples": 10032}], "download_size": 8653117062, "dataset_size": 8687266513.02}}
2023-10-15T16:16:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "leaf-images" More Information needed
[ "# Dataset Card for \"leaf-images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"leaf-images\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"leaf-images\"\n\nMore Information needed" ]
9d49f06c4096983e6508aa7f59d4d41396418183
# Dataset Card for "tamil-alpaca"

This repository includes a Tamil-translated version of the [Alpaca dataset](https://huggingface.co/datasets/yahma/alpaca-cleaned). This dataset is part of the release of Tamil LLaMA family of models – an important step in advancing LLMs for the Tamil language.

To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact.

**GitHub Repository:** [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama)

## Models trained using this dataset

| Model                    | Type                        | Data              | Base Model           | # Params | Download Links                                                         |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B Instruct  | Instruction following model | 145k instructions | Tamil LLaMA 7B Base  | 7B   | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B  | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1)                       |

## Meet the Developers

Get to know the creators behind this innovative model and follow their contributions to the field:

- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)

## Citation

If you use this model or any of the Tamil-Llama datasets in your research, please cite:

```bibtex
@misc{balachandran2023tamilllama,
      title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
      author={Abhinand Balachandran},
      year={2023},
      eprint={2311.05845},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
abhinand/tamil-alpaca
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:ta", "license:gpl-3.0", "arxiv:2311.05845", "region:us" ]
2023-10-15T16:03:16+00:00
{"language": ["ta"], "license": "gpl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "tamil-alpaca", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 287556653, "num_examples": 51876}], "download_size": 0, "dataset_size": 287556653}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-13T02:26:54+00:00
[ "2311.05845" ]
[ "ta" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us
Dataset Card for "tamil-alpaca"
===============================

This repository includes a Tamil-translated version of the Alpaca dataset. This dataset is part of the release of Tamil LLaMA family of models – an important step in advancing LLMs for the Tamil language.

To dive deep into the development and capabilities of this model, please read the research paper and the introductory blog post (WIP) that outlines our journey and the model's potential impact.

GitHub Repository: URL

Models trained using this dataset
---------------------------------

Meet the Developers
-------------------

Get to know the creators behind this innovative model and follow their contributions to the field:

* Abhinand Balachandran

If you use this model or any of the Tamil-Llama datasets in your research, please cite:
[]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us \n" ]
[ 50 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Tamil #license-gpl-3.0 #arxiv-2311.05845 #region-us \n" ]
8dddede5ea3f318b2ca54fbb47b991ccb773b76f
# Dataset Card for "healthsearchqa_answers" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
katielink/healthsearchqa_answers
[ "region:us" ]
2023-10-15T16:14:06+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "gpt-3.5-turbo_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 182952, "num_examples": 140}], "download_size": 102812, "dataset_size": 182952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-15T16:14:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "healthsearchqa_answers" More Information needed
[ "# Dataset Card for \"healthsearchqa_answers\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"healthsearchqa_answers\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"healthsearchqa_answers\"\n\nMore Information needed" ]
8433073a42b922bdbc0159e82079f2a7a3ede4f1
# Dataset Card for "isaid_sam_predicted" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
giuseppemartino/isaid_sam_predicted
[ "region:us" ]
2023-10-15T16:16:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6757890501.0, "num_examples": 899}, {"name": "validation", "num_bytes": 152972321.0, "num_examples": 17}], "download_size": 152947587, "dataset_size": 6910862822.0}}
2023-10-15T20:05:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "isaid_sam_predicted" More Information needed
[ "# Dataset Card for \"isaid_sam_predicted\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"isaid_sam_predicted\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"isaid_sam_predicted\"\n\nMore Information needed" ]
82b5d7fa97adaace2a2a825eabdf01052bc5c7f6
# Dataset Card for Research Publications (Alpaca Format) This dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023. ## Dataset Details ### Dataset Description This dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023. - **Curated by:** Jack W. - **Funded by:** Self - **Language(s) (NLP):** English - **License:** MIT ## Uses ### Direct Use This dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights. ### Out-of-Scope Use The dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered. ## Dataset Structure The dataset uses the Alpaca format suitable for Llama2 finetuning. Each data entry is a JSON object containing fields: `instruction`, `input`, and `output`. ## Dataset Creation ### Curation Rationale The dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics. ### Source Data #### Data Collection and Processing The data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023 - all information comes from Arxiv API. Thank you to arXiv for use of its open access interoperability. 
#### Who are the source data producers? Research institutions and researchers produce articles in the specified domains. ### Annotations Annotations were not provided as part of this dataset. ## Bias, Risks, and Limitations The dataset may have biases inherent to the selection and summarization of research articles. It might not cover all research in the specified domains or time frame. ### Recommendations Users should be aware of potential biases and ensure they use the dataset in contexts relevant to the research domains covered. ## Citation **Arxiv:** https://arxiv.org/ ## Glossary - **Alpaca Format:** A data structure format suitable for Llama2 finetuning. - **Llama2 (LoRa):** Reference to the machine learning model or platform being used. ## More Information https://github.com/j-webtek ## Dataset Card Authors Jack W. ## Dataset Card Contact **TBD**
jackboi/research_assist_2022_2023
[ "task_categories:text-generation", "task_categories:feature-extraction", "size_categories:10K<n<100K", "language:en", "license:mit", "region:us" ]
2023-10-15T16:19:45+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "feature-extraction"]}
2023-10-15T17:36:05+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #region-us
# Dataset Card for Research Publications (Alpaca Format) This dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023. ## Dataset Details ### Dataset Description This dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023. - Curated by: Jack W. - Funded by: Self - Language(s) (NLP): English - License: MIT ## Uses ### Direct Use This dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights. ### Out-of-Scope Use The dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered. ## Dataset Structure The dataset uses the Alpaca format suitable for Llama2 finetuning. Each data entry is a JSON object containing fields: 'instruction', 'input', and 'output'. ## Dataset Creation ### Curation Rationale The dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics. ### Source Data #### Data Collection and Processing The data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023 - all information comes from Arxiv API. Thank you to arXiv for use of its open access interoperability. 
#### Who are the source data producers? Research institutions and researchers produce articles in the specified domains. ### Annotations Annotations were not provided as part of this dataset. ## Bias, Risks, and Limitations The dataset may have biases inherent to the selection and summarization of research articles. It might not cover all research in the specified domains or time frame. ### Recommendations Users should be aware of potential biases and ensure they use the dataset in contexts relevant to the research domains covered. Arxiv: URL ## Glossary - Alpaca Format: A data structure format suitable for Llama2 finetuning. - Llama2 (LoRa): Reference to the machine learning model or platform being used. ## More Information URL ## Dataset Card Authors Jack W. ## Dataset Card Contact TBD
[ "# Dataset Card for Research Publications (Alpaca Format)\n\nThis dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023.", "## Dataset Details", "### Dataset Description\n\nThis dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023.\n\n- Curated by: Jack W.\n- Funded by: Self\n- Language(s) (NLP): English\n- License: MIT", "## Uses", "### Direct Use\n\nThis dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights.", "### Out-of-Scope Use\n\nThe dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered.", "## Dataset Structure\n\nThe dataset uses the Alpaca format suitable for Llama2 finetuning. 
Each data entry is a JSON object containing fields: 'instruction', 'input', and 'output'.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics.", "### Source Data", "#### Data Collection and Processing\n\nThe data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023 - all information comes from Arxiv API.\n\nThank you to arXiv for use of its open access interoperability.", "#### Who are the source data producers?\n\nResearch institutions and researchers produce articles in the specified domains.", "### Annotations\n\nAnnotations were not provided as part of this dataset.", "## Bias, Risks, and Limitations\n\nThe dataset may have biases inherent to the selection and summarization of research articles. It might not cover all research in the specified domains or time frame.", "### Recommendations\n\nUsers should be aware of potential biases and ensure they use the dataset in contexts relevant to the research domains covered.\n\nArxiv: URL", "## Glossary\n\n- Alpaca Format: A data structure format suitable for Llama2 finetuning.\n- Llama2 (LoRa): Reference to the machine learning model or platform being used.", "## More Information\n\nURL", "## Dataset Card Authors\n\nJack W.", "## Dataset Card Contact\n\nTBD" ]
[ "TAGS\n#task_categories-text-generation #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #region-us \n", "# Dataset Card for Research Publications (Alpaca Format)\n\nThis dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023.", "## Dataset Details", "### Dataset Description\n\nThis dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023.\n\n- Curated by: Jack W.\n- Funded by: Self\n- Language(s) (NLP): English\n- License: MIT", "## Uses", "### Direct Use\n\nThis dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights.", "### Out-of-Scope Use\n\nThe dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered.", "## Dataset Structure\n\nThe dataset uses the Alpaca format suitable for Llama2 finetuning. 
Each data entry is a JSON object containing fields: 'instruction', 'input', and 'output'.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics.", "### Source Data", "#### Data Collection and Processing\n\nThe data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023 - all information comes from Arxiv API.\n\nThank you to arXiv for use of its open access interoperability.", "#### Who are the source data producers?\n\nResearch institutions and researchers produce articles in the specified domains.", "### Annotations\n\nAnnotations were not provided as part of this dataset.", "## Bias, Risks, and Limitations\n\nThe dataset may have biases inherent to the selection and summarization of research articles. It might not cover all research in the specified domains or time frame.", "### Recommendations\n\nUsers should be aware of potential biases and ensure they use the dataset in contexts relevant to the research domains covered.\n\nArxiv: URL", "## Glossary\n\n- Alpaca Format: A data structure format suitable for Llama2 finetuning.\n- Llama2 (LoRa): Reference to the machine learning model or platform being used.", "## More Information\n\nURL", "## Dataset Card Authors\n\nJack W.", "## Dataset Card Contact\n\nTBD" ]
[ 50, 83, 4, 93, 3, 61, 33, 52, 5, 50, 4, 66, 25, 18, 46, 39, 41, 4, 9, 7 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-mit #region-us \n# Dataset Card for Research Publications (Alpaca Format)\n\nThis dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023.## Dataset Details### Dataset Description\n\nThis dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023.\n\n- Curated by: Jack W.\n- Funded by: Self\n- Language(s) (NLP): English\n- License: MIT## Uses### Direct Use\n\nThis dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights.### Out-of-Scope Use\n\nThe dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered.## Dataset Structure\n\nThe dataset uses the Alpaca format suitable for Llama2 finetuning. 
Each data entry is a JSON object containing fields: 'instruction', 'input', and 'output'.## Dataset Creation### Curation Rationale\n\nThe dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics.### Source Data#### Data Collection and Processing\n\nThe data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023 - all information comes from Arxiv API.\n\nThank you to arXiv for use of its open access interoperability." ]
2b6ad69445fbb5695c9b165475e8decdbeb97747
# Dataset Card for MathVista - [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-description) - [Paper Information](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#paper-information) - [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-examples) - [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#leaderboard) - [Dataset Usage](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#dataset-usage) - [Data Downloading](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-downloading) - [Data Format](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-format) - [Data Visualization](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-visualization) - [Data Source](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#data-source) - [Automatic Evaluation](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#automatic-evaluation) - [License](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#license) - [Citation](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/README.md#citation) ## Dataset Description **MathVista** is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of **three newly created datasets, IQTest, FunctionQA, and PaperQA**, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates **9 MathQA datasets** and **19 VQA datasets** from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. 
In total, **MathVista** includes **6,141 examples** collected from **31 different datasets**. ## Paper Information - Paper: https://arxiv.org/abs/2310.02255 - Code: https://github.com/lupantech/MathVista - Project: https://mathvista.github.io/ - Visualization: https://mathvista.github.io/#visualization - Leaderboard: https://mathvista.github.io/#leaderboard ## Dataset Examples Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA: <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/our_new_3_datasets.png" style="zoom:40%;" /> <details> <summary>🔍 Click to expand/collapse more examples</summary> Examples of seven mathematical reasoning skills: 1. Arithmetic Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/ari.png" style="zoom:40%;" /> 2. Statistical Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/sta.png" style="zoom:40%;" /> 3. Algebraic Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/alg.png" style="zoom:40%;" /> 4. Geometry Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/geo.png" style="zoom:40%;" /> 5. Numeric common sense <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/num.png" style="zoom:40%;" /> 6. Scientific Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/sci.png" style="zoom:40%;" /> 7. Logical Reasoning <img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/skills/log.png" style="zoom:40%;" /> </details> ## Leaderboard 🏆 The leaderboard for the *testmini* set (1,000 examples) is available [here](https://mathvista.github.io/#leaderboard). 🏆 The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on [CodaLab](https://codalab.org/) are under construction. 
## Dataset Usage ### Data Downloading All the data examples were divided into two subsets: *testmini* and *test*. - **testmini**: 1,000 examples used for model development, validation, or for those with limited computing resources. - **test**: 5,141 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released. You can download this dataset by the following command (make sure that you have installed [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart)): ```python from datasets import load_dataset dataset = load_dataset("AI4Math/MathVista") ``` Here are some examples of how to access the downloaded dataset: ```python # print the first example on the testmini set print(dataset["testmini"][0]) print(dataset["testmini"][0]['pid']) # print the problem id print(dataset["testmini"][0]['question']) # print the question text print(dataset["testmini"][0]['query']) # print the query text print(dataset["testmini"][0]['image']) # print the image path print(dataset["testmini"][0]['answer']) # print the answer dataset["testmini"][0]['decoded_image'] # display the image # print the first example on the test set print(dataset["test"][0]) ``` ### Data Format The dataset is provided in json format and contains the following attributes: ```json { "question": [string] The question text, "image": [string] A file path pointing to the associated image, "choices": [list] Choice options for multiple-choice problems. For free-form problems, this could be a 'none' value, "unit": [string] The unit associated with the answer, e.g., "m^2", "years". 
If no unit is relevant, it can be a 'none' value,
    "precision": [integer] The number of decimal places the answer should be rounded to,
    "answer": [string] The correct answer for the problem,
    "question_type": [string] The type of question: "multi_choice" or "free_form",
    "answer_type": [string] The format of the answer: "text", "integer", "float", or "list",
    "pid": [string] Problem ID, e.g., "1",
    "metadata": {
        "split": [string] Data split: "testmini" or "test",
        "language": [string] Question language: "English", "Chinese", or "Persian",
        "img_width": [integer] The width of the associated image in pixels,
        "img_height": [integer] The height of the associated image in pixels,
        "source": [string] The source dataset from which the problem was taken,
        "category": [string] The category of the problem: "math-targeted-vqa" or "general-vqa",
        "task": [string] The task of the problem, e.g., "geometry problem solving",
        "context": [string] The visual context type of the associated image,
        "grade": [string] The grade level of the problem, e.g., "high school",
        "skills": [list] A list of mathematical reasoning skills that the problem tests
    },
    "query": [string] the query text used as input (prompt) for the evaluation model
}
```

### Data Visualization

🎰 You can explore the dataset in an interactive way [here](https://mathvista.github.io/#visualization).

<details>
<summary>Click to expand/collapse the visualization page screenshot.</summary>

<img src="https://raw.githubusercontent.com/lupantech/MathVista/main/assets/data_visualizer.png" style="zoom:40%;" />

</details>

### Data Source

The **MathVista** dataset is derived from three newly collected datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. Details can be found in the [source.json](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/source.json) file. All these source datasets have been preprocessed and labeled for evaluation purposes.
### Automatic Evaluation 🔔 To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/lupantech/MathVista/tree/main). ## License The new contributions to our dataset are distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license, including - The creation of three datasets: IQTest, FunctionQA, and Paper; - The filtering and cleaning of source datasets; - The standard formalization of instances for evaluation purposes; - The annotations of metadata. The copyright of the images and the questions belongs to the original authors, and the source of every image and original question can be found in the `metadata` field and in the [source.json](https://huggingface.co/datasets/AI4Math/MathVista/blob/main/source.json) file. Alongside this license, the following conditions apply: - **Purpose:** The dataset was primarily designed for use as a test set. - **Commercial Use:** The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. ## Citation If you use the **MathVista** dataset in your work, please kindly cite the paper using this BibTeX: ``` @inproceedings{lu2024mathvista, author = {Lu, Pan and Bansal, Hritik and Xia, Tony and Liu, Jiacheng and Li, Chunyuan and Hajishirzi, Hannaneh and Cheng, Hao and Chang, Kai-Wei and Galley, Michel and Gao, Jianfeng}, title = {MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts}, booktitle = {International Conference on Learning Representations (ICLR)}, year = {2024} } ```
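To give a feel for how the documented answer fields fit together, here is a simplified, unofficial sketch of answer matching based on `question_type`, `answer_type`, `precision`, and `answer` as described in the Data Format section. The official evaluation scripts live in the MathVista GitHub repository; this is only an illustration, and the normalization rules here are assumptions, not the exact official logic.

```python
# Simplified, unofficial sketch of answer matching using the fields documented
# above. Refer to the official GitHub repository for the actual evaluation.

def is_correct(example: dict, prediction: str) -> bool:
    answer = example["answer"]
    if example["question_type"] == "multi_choice":
        # For multiple-choice problems, compare the chosen option directly.
        return prediction.strip() == answer.strip()
    if example["answer_type"] == "float":
        # Round free-form float answers to the documented number of decimals.
        ndigits = int(example.get("precision") or 0)
        try:
            return round(float(prediction), ndigits) == round(float(answer), ndigits)
        except ValueError:
            return False
    # "text", "integer", "list": fall back to a normalized string comparison.
    return prediction.strip().lower() == answer.strip().lower()

# Hypothetical free-form example with a float answer rounded to 1 decimal.
example = {
    "question_type": "free_form",
    "answer_type": "float",
    "precision": 1,
    "answer": "4.5",
}
print(is_correct(example, "4.54"))
```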
AI4Math/MathVista
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:visual-question-answering", "task_categories:text-classification", "task_ids:multiple-choice-qa", "task_ids:closed-domain-qa", "task_ids:open-domain-qa", "task_ids:visual-question-answering", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:zh", "language:fa", "license:cc-by-sa-4.0", "multi-modal-qa", "math-qa", "figure-qa", "geometry-qa", "math-word-problem", "textbook-qa", "vqa", "arithmetic-reasoning", "statistical-reasoning", "algebraic-reasoning", "geometry-reasoning", "numeric-common-sense", "scientific-reasoning", "logical-reasoning", "geometry-diagram", "synthetic-scene", "chart", "plot", "scientific-figure", "table", "function-plot", "abstract-scene", "puzzle-test", "document-image", "medical-image", "mathematics", "science", "chemistry", "biology", "physics", "engineering", "natural-science", "arxiv:2310.02255", "region:us" ]
2023-10-15T16:49:10+00:00
{"annotations_creators": ["expert-generated", "found"], "language_creators": ["expert-generated", "found"], "language": ["en", "zh", "fa"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "question-answering", "visual-question-answering", "text-classification"], "task_ids": ["multiple-choice-qa", "closed-domain-qa", "open-domain-qa", "visual-question-answering", "multi-class-classification"], "paperswithcode_id": "mathvista", "pretty_name": "MathVista", "tags": ["multi-modal-qa", "math-qa", "figure-qa", "geometry-qa", "math-word-problem", "textbook-qa", "vqa", "arithmetic-reasoning", "statistical-reasoning", "algebraic-reasoning", "geometry-reasoning", "numeric-common-sense", "scientific-reasoning", "logical-reasoning", "geometry-diagram", "synthetic-scene", "chart", "plot", "scientific-figure", "table", "function-plot", "abstract-scene", "puzzle-test", "document-image", "medical-image", "mathematics", "science", "chemistry", "biology", "physics", "engineering", "natural-science"], "configs": [{"config_name": "default", "data_files": [{"split": "testmini", "path": "data/testmini-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "pid", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "decoded_image", "dtype": "image"}, {"name": "choices", "sequence": "string"}, {"name": "unit", "dtype": "string"}, {"name": "precision", "dtype": "float64"}, {"name": "answer", "dtype": "string"}, {"name": "question_type", "dtype": "string"}, {"name": "answer_type", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "category", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "grade", "dtype": "string"}, {"name": "img_height", "dtype": "int64"}, {"name": "img_width", "dtype": "int64"}, {"name": "language", "dtype": "string"}, {"name": 
"skills", "sequence": "string"}, {"name": "source", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "task", "dtype": "string"}]}, {"name": "query", "dtype": "string"}], "splits": [{"name": "testmini", "num_bytes": 142635198.0, "num_examples": 1000}, {"name": "test", "num_bytes": 648291350.22, "num_examples": 5141}], "download_size": 885819490, "dataset_size": 790926548.22}}
2024-02-11T23:09:05+00:00
[ "2310.02255" ]
[ "en", "zh", "fa" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #task_categories-text-classification #task_ids-multiple-choice-qa #task_ids-closed-domain-qa #task_ids-open-domain-qa #task_ids-visual-question-answering #task_ids-multi-class-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-Chinese #language-Persian #license-cc-by-sa-4.0 #multi-modal-qa #math-qa #figure-qa #geometry-qa #math-word-problem #textbook-qa #vqa #arithmetic-reasoning #statistical-reasoning #algebraic-reasoning #geometry-reasoning #numeric-common-sense #scientific-reasoning #logical-reasoning #geometry-diagram #synthetic-scene #chart #plot #scientific-figure #table #function-plot #abstract-scene #puzzle-test #document-image #medical-image #mathematics #science #chemistry #biology #physics #engineering #natural-science #arxiv-2310.02255 #region-us
# Dataset Card for MathVista - Dataset Description - Paper Information - Dataset Examples - Leaderboard - Dataset Usage - Data Downloading - Data Format - Data Visualization - Data Source - Automatic Evaluation - License - Citation ## Dataset Description MathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates 9 MathQA datasets and 19 VQA datasets from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. In total, MathVista includes 6,141 examples collected from 31 different datasets. ## Paper Information - Paper: URL - Code: URL - Project: URL - Visualization: URL - Leaderboard: URL ## Dataset Examples Examples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA: <img src="URL style="zoom:40%;" /> <details> <summary> Click to expand/collapse more examples</summary> Examples of seven mathematical reasoning skills: 1. Arithmetic Reasoning <img src="URL style="zoom:40%;" /> 2. Statistical Reasoning <img src="URL style="zoom:40%;" /> 3. Algebraic Reasoning <img src="URL style="zoom:40%;" /> 4. Geometry Reasoning <img src="URL style="zoom:40%;" /> 5. Numeric common sense <img src="URL style="zoom:40%;" /> 6. Scientific Reasoning <img src="URL style="zoom:40%;" /> 7. Logical Reasoning <img src="URL style="zoom:40%;" /> </details> ## Leaderboard The leaderboard for the *testmini* set (1,000 examples) is available here. The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on CodaLab are under construction. 
## Dataset Usage ### Data Downloading All the data examples were divided into two subsets: *testmini* and *test*. - testmini: 1,000 examples used for model development, validation, or for those with limited computing resources. - test: 5,141 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released. You can download this dataset by the following command (make sure that you have installed Huggingface Datasets): Here are some examples of how to access the downloaded dataset: ### Data Format The dataset is provided in json format and contains the following attributes: ### Data Visualization You can explore the dataset in an interactive way here. <details> <summary>Click to expand/collapse the visualization page screenshot.</summary> <img src="URL style="zoom:40%;" /> </details> ### Data Source The MathVista dataset is derived from three newly collected datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. Details can be found in the URL file. All these source datasets have been preprocessed and labeled for evaluation purposes. ### Automatic Evaluation To automatically evaluate a model on the dataset, please refer to our GitHub repository here. ## License The new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including - The creation of three datasets: IQTest, FunctionQA, and Paper; - The filtering and cleaning of source datasets; - The standard formalization of instances for evaluation purposes; - The annotations of metadata. The copyright of the images and the questions belongs to the original authors, and the source of every image and original question can be found in the 'metadata' field and in the URL file. Alongside this license, the following conditions apply: - Purpose: The dataset was primarily designed for use as a test set. - Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited.
By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license. If you use the MathVista dataset in your work, please kindly cite the paper using this BibTeX:
[ "# Dataset Card for MathVista\n\n- Dataset Description\n- Paper Information\n- Dataset Examples\n- Leaderboard\n- Dataset Usage\n - Data Downloading\n - Data Format\n - Data Visualization\n - Data Source\n - Automatic Evaluation\n- License\n- Citation", "## Dataset Description\n\nMathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates 9 MathQA datasets and 19 VQA datasets from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. In total, MathVista includes 6,141 examples collected from 31 different datasets.", "## Paper Information\n\n- Paper: URL\n- Code: URL\n- Project: URL\n- Visualization: URL\n- Leaderboard: URL", "## Dataset Examples\n\nExamples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA:\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n<details>\n<summary> Click to expand/collapse more examples</summary>\n\nExamples of seven mathematical reasoning skills:\n\n1. Arithmetic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n2. Statistical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n3. Algebraic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n4. Geometry Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n5. Numeric common sense\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n6. Scientific Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n7. 
Logical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n</details>", "## Leaderboard\n\n The leaderboard for the *testmini* set (1,000 examples) is available here.\n\n The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on CodaLab are under construction.", "## Dataset Usage", "### Data Downloading\n\nAll the data examples were divided into two subsets: *testmini* and *test*.\n\n- testmini: 1,000 examples used for model development, validation, or for those with limited computing resources.\n- test: 5,141 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.\n\nYou can download this dataset by the following command (make sure that you have installed Huggingface Datasets):\n\n\n\nHere are some examples of how to access the downloaded dataset:", "### Data Format\n\nThe dataset is provided in json format and contains the following attributes:", "### Data Visualization\n\n You can explore the dataset in an interactive way here.\n\n<details>\n<summary>Click to expand/collapse the visualization page screeshot.</summary>\n<img src=\"URL style=\"zoom:40%;\" />\n</details>", "### Data Source\n\nThe MathVista dataset is derived from three newly collected datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. Details can be found in the URL file. 
All these source datasets have been preprocessed and labeled for evaluation purposes.", "### Automatic Evaluation\n\n To automatically evaluate a model on the dataset, please refer to our GitHub repository here.", "## License\n\nThe new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including\n\n- The creation of three datasets: IQTest, FunctionQA, and Paper;\n- The filtering and cleaning of source datasets;\n- The standard formalization of instances for evaluation purposes;\n- The annotations of metadata.\n\nThe copyright of the images and the questions belongs to the original authors, and the source of every image and original question can be found in the 'metadata' field and in the URL file. Alongside this license, the following conditions apply:\n\n- Purpose: The dataset was primarily designed for use as a test set.\n- Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license.\n\nIf you use the MathVista dataset in your work, please kindly cite the paper using this BibTeX:" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #task_categories-text-classification #task_ids-multiple-choice-qa #task_ids-closed-domain-qa #task_ids-open-domain-qa #task_ids-visual-question-answering #task_ids-multi-class-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-Chinese #language-Persian #license-cc-by-sa-4.0 #multi-modal-qa #math-qa #figure-qa #geometry-qa #math-word-problem #textbook-qa #vqa #arithmetic-reasoning #statistical-reasoning #algebraic-reasoning #geometry-reasoning #numeric-common-sense #scientific-reasoning #logical-reasoning #geometry-diagram #synthetic-scene #chart #plot #scientific-figure #table #function-plot #abstract-scene #puzzle-test #document-image #medical-image #mathematics #science #chemistry #biology #physics #engineering #natural-science #arxiv-2310.02255 #region-us \n", "# Dataset Card for MathVista\n\n- Dataset Description\n- Paper Information\n- Dataset Examples\n- Leaderboard\n- Dataset Usage\n - Data Downloading\n - Data Format\n - Data Visualization\n - Data Source\n - Automatic Evaluation\n- License\n- Citation", "## Dataset Description\n\nMathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates 9 MathQA datasets and 19 VQA datasets from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. 
In total, MathVista includes 6,141 examples collected from 31 different datasets.", "## Paper Information\n\n- Paper: URL\n- Code: URL\n- Project: URL\n- Visualization: URL\n- Leaderboard: URL", "## Dataset Examples\n\nExamples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA:\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n<details>\n<summary> Click to expand/collapse more examples</summary>\n\nExamples of seven mathematical reasoning skills:\n\n1. Arithmetic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n2. Statistical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n3. Algebraic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n4. Geometry Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n5. Numeric common sense\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n6. Scientific Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n7. Logical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n</details>", "## Leaderboard\n\n The leaderboard for the *testmini* set (1,000 examples) is available here.\n\n The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on CodaLab are under construction.", "## Dataset Usage", "### Data Downloading\n\nAll the data examples were divided into two subsets: *testmini* and *test*.\n\n- testmini: 1,000 examples used for model development, validation, or for those with limited computing resources.\n- test: 5,141 examples for standard evaluation. 
Notably, the answer labels for test will NOT be publicly released.\n\nYou can download this dataset by the following command (make sure that you have installed Huggingface Datasets):\n\n\n\nHere are some examples of how to access the downloaded dataset:", "### Data Format\n\nThe dataset is provided in json format and contains the following attributes:", "### Data Visualization\n\n You can explore the dataset in an interactive way here.\n\n<details>\n<summary>Click to expand/collapse the visualization page screeshot.</summary>\n<img src=\"URL style=\"zoom:40%;\" />\n</details>", "### Data Source\n\nThe MathVista dataset is derived from three newly collected datasets: IQTest, FunctionQA, and Paper, as well as 28 other source datasets. Details can be found in the URL file. All these source datasets have been preprocessed and labeled for evaluation purposes.", "### Automatic Evaluation\n\n To automatically evaluate a model on the dataset, please refer to our GitHub repository here.", "## License\n\nThe new contributions to our dataset are distributed under the CC BY-SA 4.0 license, including\n\n- The creation of three datasets: IQTest, FunctionQA, and Paper;\n- The filtering and cleaning of source datasets;\n- The standard formalization of instances for evaluation purposes;\n- The annotations of metadata.\n\nThe copyright of the images and the questions belongs to the original authors, and the source of every image and original question can be found in the 'metadata' field and in the URL file. Alongside this license, the following conditions apply:\n\n- Purpose: The dataset was primarily designed for use as a test set.\n- Commercial Use: The dataset can be used commercially as a test set, but using it as a training set is prohibited. By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY-SA 4.0 license.\n\nIf you use the MathVista dataset in your work, please kindly cite the paper using this BibTeX:" ]
[ 396, 52, 161, 25, 233, 49, 5, 119, 21, 65, 71, 29, 235 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-visual-question-answering #task_categories-text-classification #task_ids-multiple-choice-qa #task_ids-closed-domain-qa #task_ids-open-domain-qa #task_ids-visual-question-answering #task_ids-multi-class-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-Chinese #language-Persian #license-cc-by-sa-4.0 #multi-modal-qa #math-qa #figure-qa #geometry-qa #math-word-problem #textbook-qa #vqa #arithmetic-reasoning #statistical-reasoning #algebraic-reasoning #geometry-reasoning #numeric-common-sense #scientific-reasoning #logical-reasoning #geometry-diagram #synthetic-scene #chart #plot #scientific-figure #table #function-plot #abstract-scene #puzzle-test #document-image #medical-image #mathematics #science #chemistry #biology #physics #engineering #natural-science #arxiv-2310.02255 #region-us \n# Dataset Card for MathVista\n\n- Dataset Description\n- Paper Information\n- Dataset Examples\n- Leaderboard\n- Dataset Usage\n - Data Downloading\n - Data Format\n - Data Visualization\n - Data Source\n - Automatic Evaluation\n- License\n- Citation", "passage: ## Dataset Description\n\nMathVista is a consolidated Mathematical reasoning benchmark within Visual contexts. It consists of three newly created datasets, IQTest, FunctionQA, and PaperQA, which address the missing visual domains and are tailored to evaluate logical reasoning on puzzle test figures, algebraic reasoning over functional plots, and scientific reasoning with academic paper figures, respectively. It also incorporates 9 MathQA datasets and 19 VQA datasets from the literature, which significantly enrich the diversity and complexity of visual perception and mathematical reasoning challenges within our benchmark. 
In total, MathVista includes 6,141 examples collected from 31 different datasets.## Paper Information\n\n- Paper: URL\n- Code: URL\n- Project: URL\n- Visualization: URL\n- Leaderboard: URL## Dataset Examples\n\nExamples of our newly annotated datasets: IQTest, FunctionQA, and PaperQA:\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n<details>\n<summary> Click to expand/collapse more examples</summary>\n\nExamples of seven mathematical reasoning skills:\n\n1. Arithmetic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n2. Statistical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n3. Algebraic Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n4. Geometry Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n5. Numeric common sense\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n6. Scientific Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n7. Logical Reasoning\n\n<img src=\"URL style=\"zoom:40%;\" />\n\n</details>## Leaderboard\n\n The leaderboard for the *testmini* set (1,000 examples) is available here.\n\n The leaderboard for the *test* set (5,141 examples) and the automatic evaluation on CodaLab are under construction.## Dataset Usage### Data Downloading\n\nAll the data examples were divided into two subsets: *testmini* and *test*.\n\n- testmini: 1,000 examples used for model development, validation, or for those with limited computing resources.\n- test: 5,141 examples for standard evaluation. Notably, the answer labels for test will NOT be publicly released.\n\nYou can download this dataset by the following command (make sure that you have installed Huggingface Datasets):\n\n\n\nHere are some examples of how to access the downloaded dataset:### Data Format\n\nThe dataset is provided in json format and contains the following attributes:" ]
30e67d331949f0c9a835387aa4d01e0ba713d9ec
# Dataset Card for "gpt4v-emotion-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
laion/gpt4v-emotion-dataset
[ "region:us" ]
2023-10-15T17:25:14+00:00
{"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "message_id", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 241951.0, "num_examples": 114}], "download_size": 145085, "dataset_size": 241951.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-30T02:28:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gpt4v-emotion-dataset" More Information needed
[ "# Dataset Card for \"gpt4v-emotion-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gpt4v-emotion-dataset\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"gpt4v-emotion-dataset\"\n\nMore Information needed" ]
c012a0a731bede7095979ba4b0661172ec2dd57a
# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.03544463087248322, "em_stderr": 0.0018935573437954016, "f1": 0.08440436241610706, "f1_stderr": 0.002470333585036359, "acc": 0.2841357537490134, "acc_stderr": 0.0069604360550053574 }, "harness|drop|3": { "em": 0.03544463087248322, "em_stderr": 0.0018935573437954016, "f1": 0.08440436241610706, "f1_stderr": 0.002470333585036359 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.5682715074980268, "acc_stderr": 0.013920872110010715 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16
[ "region:us" ]
2023-10-15T18:12:38+00:00
{"pretty_name": "Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16", "dataset_summary": "Dataset automatically created during the evaluation run of model [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954016,\n \"f1\": 0.08440436241610706,\n \"f1_stderr\": 0.002470333585036359,\n \"acc\": 0.2841357537490134,\n \"acc_stderr\": 0.0069604360550053574\n },\n \"harness|drop|3\": {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954016,\n \"f1\": 0.08440436241610706,\n \"f1_stderr\": 0.002470333585036359\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5682715074980268,\n \"acc_stderr\": 0.013920872110010715\n }\n}\n```", "repo_url": "https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T19_12_34.050776", "path": ["**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T19_12_34.050776", "path": ["**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T19_12_34.050776", "path": ["**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T19_12_34.050776", "path": ["results_2023-10-15T19-12-34.050776.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T19-12-34.050776.parquet"]}]}]}
2023-10-15T18:12:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T19:12:34.050776 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T19:12:34.050776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T19:12:34.050776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 35, 31, 183, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T19:12:34.050776(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
302468a06cd220cd0ab58d221346a7688f2b3be5
# Dataset Card for "Extra-Context" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Brecon/Extra-Context
[ "region:us" ]
2023-10-15T18:20:04+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 211025.71889400922, "num_examples": 173}, {"name": "test", "num_bytes": 53671.28110599078, "num_examples": 44}], "download_size": 142693, "dataset_size": 264697.0}}
2023-10-15T18:30:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Extra-Context" More Information needed
[ "# Dataset Card for \"Extra-Context\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Extra-Context\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Extra-Context\"\n\nMore Information needed" ]
75b61db643635797e89f55f479206f189d3d7f64
# Dataset Card for Evaluation run of chargoddard/llama2-22b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/chargoddard/llama2-22b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [chargoddard/llama2-22b](https://huggingface.co/chargoddard/llama2-22b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_chargoddard__llama2-22b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T19:23:07.867810](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b/blob/main/results_2023-10-15T19-23-07.867810.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0020973154362416107, "em_stderr": 0.00046850650303682974, "f1": 0.06078334731543612, "f1_stderr": 0.0013790362682380892, "acc": 0.4312689350534026, "acc_stderr": 0.010092981888945675 }, "harness|drop|3": { "em": 0.0020973154362416107, "em_stderr": 0.00046850650303682974, "f1": 0.06078334731543612, "f1_stderr": 0.0013790362682380892 }, "harness|gsm8k|5": { "acc": 0.09931766489764973, "acc_stderr": 0.008238371412683961 }, "harness|winogrande|5": { "acc": 0.7632202052091555, "acc_stderr": 0.011947592365207389 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_chargoddard__llama2-22b
[ "region:us" ]
2023-10-15T18:23:11+00:00
{"pretty_name": "Evaluation run of chargoddard/llama2-22b", "dataset_summary": "Dataset automatically created during the evaluation run of model [chargoddard/llama2-22b](https://huggingface.co/chargoddard/llama2-22b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__llama2-22b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T19:23:07.867810](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b/blob/main/results_2023-10-15T19-23-07.867810.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0020973154362416107,\n \"em_stderr\": 0.00046850650303682974,\n \"f1\": 0.06078334731543612,\n \"f1_stderr\": 0.0013790362682380892,\n \"acc\": 0.4312689350534026,\n \"acc_stderr\": 0.010092981888945675\n },\n \"harness|drop|3\": {\n \"em\": 0.0020973154362416107,\n \"em_stderr\": 0.00046850650303682974,\n \"f1\": 0.06078334731543612,\n \"f1_stderr\": 0.0013790362682380892\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09931766489764973,\n \"acc_stderr\": 0.008238371412683961\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.011947592365207389\n }\n}\n```", "repo_url": "https://huggingface.co/chargoddard/llama2-22b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T19_23_07.867810", "path": ["**/details_harness|drop|3_2023-10-15T19-23-07.867810.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T19-23-07.867810.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T19_23_07.867810", "path": ["**/details_harness|gsm8k|5_2023-10-15T19-23-07.867810.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T19-23-07.867810.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T19_23_07.867810", "path": ["**/details_harness|winogrande|5_2023-10-15T19-23-07.867810.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T19-23-07.867810.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T19_23_07.867810", "path": ["results_2023-10-15T19-23-07.867810.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T19-23-07.867810.parquet"]}]}]}
2023-10-15T18:23:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of chargoddard/llama2-22b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model chargoddard/llama2-22b on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T19:23:07.867810(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of chargoddard/llama2-22b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama2-22b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T19:23:07.867810(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of chargoddard/llama2-22b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama2-22b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T19:23:07.867810(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 19, 31, 167, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of chargoddard/llama2-22b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model chargoddard/llama2-22b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T19:23:07.867810(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
031c0f54a5bf5ec83af4fa95b56b64bbf99c49e7
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
gg7676/UserNavigation
[ "region:us" ]
2023-10-15T18:48:03+00:00
{}
2023-10-15T19:00:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
27d01cd97ddb5ca295f196b0bd77019bf87eb336
# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/ziqingyang/chinese-alpaca-2-13b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [ziqingyang/chinese-alpaca-2-13b](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T20:22:27.142442](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b/blob/main/results_2023-10-15T20-22-27.142442.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.32728607382550334, "em_stderr": 0.004805279168508311, "f1": 0.4106134647651026, "f1_stderr": 0.004650726360819101, "acc": 0.4307653965208868, "acc_stderr": 0.010243166856230161 }, "harness|drop|3": { "em": 0.32728607382550334, "em_stderr": 0.004805279168508311, "f1": 0.4106134647651026, "f1_stderr": 0.004650726360819101 }, "harness|gsm8k|5": { "acc": 0.10462471569370735, "acc_stderr": 0.008430668082029278 }, "harness|winogrande|5": { "acc": 0.7569060773480663, "acc_stderr": 0.012055665630431043 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b
[ "region:us" ]
2023-10-15T19:22:31+00:00
{"pretty_name": "Evaluation run of ziqingyang/chinese-alpaca-2-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ziqingyang/chinese-alpaca-2-13b](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T20:22:27.142442](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b/blob/main/results_2023-10-15T20-22-27.142442.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.32728607382550334,\n \"em_stderr\": 0.004805279168508311,\n \"f1\": 0.4106134647651026,\n \"f1_stderr\": 0.004650726360819101,\n \"acc\": 0.4307653965208868,\n \"acc_stderr\": 0.010243166856230161\n },\n \"harness|drop|3\": {\n \"em\": 0.32728607382550334,\n \"em_stderr\": 0.004805279168508311,\n \"f1\": 0.4106134647651026,\n \"f1_stderr\": 0.004650726360819101\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10462471569370735,\n \"acc_stderr\": 0.008430668082029278\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7569060773480663,\n \"acc_stderr\": 0.012055665630431043\n }\n}\n```", "repo_url": "https://huggingface.co/ziqingyang/chinese-alpaca-2-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T20_22_27.142442", "path": ["**/details_harness|drop|3_2023-10-15T20-22-27.142442.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T20-22-27.142442.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T20_22_27.142442", "path": ["**/details_harness|gsm8k|5_2023-10-15T20-22-27.142442.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T20-22-27.142442.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T20_22_27.142442", "path": ["**/details_harness|winogrande|5_2023-10-15T20-22-27.142442.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T20-22-27.142442.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T20_22_27.142442", "path": ["results_2023-10-15T20-22-27.142442.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T20-22-27.142442.parquet"]}]}]}
2023-10-15T19:22:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model ziqingyang/chinese-alpaca-2-13b on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T20:22:27.142442(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-alpaca-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T20:22:27.142442(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-alpaca-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T20:22:27.142442(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ziqingyang/chinese-alpaca-2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T20:22:27.142442(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
a34f4f3cc17c955c6bee87760398111362ffa956
This is a dataset based on the works of Anne Stokes. It was made using Pirsus Artstation, which is trained on SD 1.5. The images have been cropped, touched up, and resized to SD 1.5's base resolutions of 512x768 and 768x512. You should be able to use kohya or DreamBooth to train a LoRA using this.
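The description above mentions that each image was resized to one of SD 1.5's two base resolutions, 512x768 or 768x512. A tiny sketch of the orientation-based choice this implies (the helper name is ours, not part of the dataset):

```python
def sd15_base_resolution(width: int, height: int) -> tuple[int, int]:
    """Pick the SD 1.5 base resolution matching an image's orientation:
    portrait (or square) sources map to 512x768, landscape sources to 768x512."""
    return (512, 768) if height >= width else (768, 512)

# A portrait-oriented source image would be targeted at 512x768.
print(sd15_base_resolution(1000, 1500))
```

This only picks the target size; the actual cropping and touch-up described above would still be done by hand or with an image tool.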
sola1ree/AnneStokes
[ "region:us" ]
2023-10-15T19:40:23+00:00
{}
2023-10-15T22:23:21+00:00
[]
[]
TAGS #region-us
This is a dataset based on the works of Anne Stokes. It was made using Pirsus Artstation, which is trained on SD 1.5. The images have been cropped, touched up, and resized to SD 1.5's base resolutions of 512x768 and 768x512. You should be able to use kohya or DreamBooth to train a LoRA using this.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
11030e07c0e846216e954d0fcceba2b1870dda21
# Dataset Card for "CLMTokenizer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dip0323/CLMTokenizer
[ "region:us" ]
2023-10-15T19:48:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 710073276, "num_examples": 1376111}, {"name": "valid", "num_bytes": 7016052, "num_examples": 13597}], "download_size": 314934179, "dataset_size": 717089328}}
2023-10-15T19:48:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CLMTokenizer" More Information needed
[ "# Dataset Card for \"CLMTokenizer\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CLMTokenizer\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"CLMTokenizer\"\n\nMore Information needed" ]
0de03da0b6a7bd42398397350e79de6c252fa554
# Dataset Card for "voxelgym_5c_critic_42x42_28" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Cubpaw/voxelgym_5c_critic_42x42_28
[ "region:us" ]
2023-10-15T20:35:12+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "astar_path", "dtype": "image"}, {"name": "pred_path", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 173483.0, "num_examples": 23}, {"name": "validation", "num_bytes": 37745.0, "num_examples": 5}], "download_size": 50824, "dataset_size": 211228.0}}
2023-10-15T20:35:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "voxelgym_5c_critic_42x42_28" More Information needed
[ "# Dataset Card for \"voxelgym_5c_critic_42x42_28\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"voxelgym_5c_critic_42x42_28\"\n\nMore Information needed" ]
[ 6, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"voxelgym_5c_critic_42x42_28\"\n\nMore Information needed" ]
276951d096b6c705e9ad37605a1c0373e407ce52
# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Yhyu13/llama-30B-hf-openassitant - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Yhyu13/llama-30B-hf-openassitant](https://huggingface.co/Yhyu13/llama-30B-hf-openassitant) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T22:09:11.828298](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant/blob/main/results_2023-10-15T22-09-11.828298.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0014681208053691276, "em_stderr": 0.0003921042190298701, "f1": 0.06332634228187943, "f1_stderr": 0.0013742294190200051, "acc": 0.47445656434133393, "acc_stderr": 0.010516415781576863 }, "harness|drop|3": { "em": 0.0014681208053691276, "em_stderr": 0.0003921042190298701, "f1": 0.06332634228187943, "f1_stderr": 0.0013742294190200051 }, "harness|gsm8k|5": { "acc": 0.14859742228961334, "acc_stderr": 0.009797503180527876 }, "harness|winogrande|5": { "acc": 0.8003157063930545, "acc_stderr": 0.011235328382625849 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
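The snippet above loads the raw per-sample details; as a complement, a minimal sketch of pulling a headline number out of the latest-results JSON printed above (the values are copied from this card, the parsing itself is generic):

```python
import json

# Latest-results payload as printed in this card, abbreviated to the winogrande entry.
results_json = '{"harness|winogrande|5": {"acc": 0.8003157063930545, "acc_stderr": 0.011235328382625849}}'

results = json.loads(results_json)
winogrande = results["harness|winogrande|5"]
print(f"winogrande acc = {winogrande['acc']:.4f} ± {winogrande['acc_stderr']:.4f}")
```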
open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant
[ "region:us" ]
2023-10-15T21:09:15+00:00
{"pretty_name": "Evaluation run of Yhyu13/llama-30B-hf-openassitant", "dataset_summary": "Dataset automatically created during the evaluation run of model [Yhyu13/llama-30B-hf-openassitant](https://huggingface.co/Yhyu13/llama-30B-hf-openassitant) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T22:09:11.828298](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant/blob/main/results_2023-10-15T22-09-11.828298.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298701,\n \"f1\": 0.06332634228187943,\n \"f1_stderr\": 0.0013742294190200051,\n \"acc\": 0.47445656434133393,\n \"acc_stderr\": 0.010516415781576863\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298701,\n \"f1\": 0.06332634228187943,\n \"f1_stderr\": 0.0013742294190200051\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14859742228961334,\n \"acc_stderr\": 0.009797503180527876\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625849\n }\n}\n```", "repo_url": "https://huggingface.co/Yhyu13/llama-30B-hf-openassitant", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T22_09_11.828298", "path": ["**/details_harness|drop|3_2023-10-15T22-09-11.828298.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T22-09-11.828298.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T22_09_11.828298", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-09-11.828298.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-09-11.828298.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T22_09_11.828298", "path": ["**/details_harness|winogrande|5_2023-10-15T22-09-11.828298.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T22-09-11.828298.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T22_09_11.828298", "path": ["results_2023-10-15T22-09-11.828298.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T22-09-11.828298.parquet"]}]}]}
2023-10-15T21:09:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Yhyu13/llama-30B-hf-openassitant on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T22:09:11.828298 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Yhyu13/llama-30B-hf-openassitant on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:09:11.828298(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Yhyu13/llama-30B-hf-openassitant on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:09:11.828298(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 26, 31, 174, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Yhyu13/llama-30B-hf-openassitant on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T22:09:11.828298(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
7cf5aac57c3efb63997f48b0d4c0dc245367dcdb
# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [togethercomputer/GPT-JT-Moderation-6B](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T22:16:11.352297](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B/blob/main/results_2023-10-15T22-16-11.352297.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.004089765100671141, "em_stderr": 0.0006535802669912847, "f1": 0.041537332214765195, "f1_stderr": 0.0012446539419451222, "acc": 0.3182665708457473, "acc_stderr": 0.008157539670038592 }, "harness|drop|3": { "em": 0.004089765100671141, "em_stderr": 0.0006535802669912847, "f1": 0.041537332214765195, "f1_stderr": 0.0012446539419451222 }, "harness|gsm8k|5": { "acc": 0.009855951478392721, "acc_stderr": 0.0027210765770416634 }, "harness|winogrande|5": { "acc": 0.6266771902131019, "acc_stderr": 0.013594002763035523 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
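Each per-task entry above is keyed as `harness|<task>|<num_fewshot>` and reports whichever metrics apply to that task. A hedged sketch of walking those entries and indexing them by bare task name (numbers copied from this card):

```python
import json

# Per-task entries from the latest-results JSON above, trimmed to the headline metrics.
results_json = '''
{
  "harness|drop|3": {"em": 0.004089765100671141, "f1": 0.041537332214765195},
  "harness|gsm8k|5": {"acc": 0.009855951478392721},
  "harness|winogrande|5": {"acc": 0.6266771902131019}
}
'''

# Strip the "harness|" prefix and the few-shot count to get plain task names.
per_task = {
    task.split("|")[1]: metrics
    for task, metrics in json.loads(results_json).items()
    if task.startswith("harness|")
}
print(sorted(per_task))
```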
open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B
[ "region:us" ]
2023-10-15T21:16:14+00:00
{"pretty_name": "Evaluation run of togethercomputer/GPT-JT-Moderation-6B", "dataset_summary": "Dataset automatically created during the evaluation run of model [togethercomputer/GPT-JT-Moderation-6B](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T22:16:11.352297](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B/blob/main/results_2023-10-15T22-16-11.352297.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004089765100671141,\n \"em_stderr\": 0.0006535802669912847,\n \"f1\": 0.041537332214765195,\n \"f1_stderr\": 0.0012446539419451222,\n \"acc\": 0.3182665708457473,\n \"acc_stderr\": 0.008157539670038592\n },\n \"harness|drop|3\": {\n \"em\": 0.004089765100671141,\n \"em_stderr\": 0.0006535802669912847,\n \"f1\": 0.041537332214765195,\n \"f1_stderr\": 0.0012446539419451222\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \"acc_stderr\": 0.0027210765770416634\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6266771902131019,\n \"acc_stderr\": 0.013594002763035523\n }\n}\n```", "repo_url": "https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T22_16_11.352297", "path": ["**/details_harness|drop|3_2023-10-15T22-16-11.352297.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T22-16-11.352297.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T22_16_11.352297", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-16-11.352297.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-16-11.352297.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T22_16_11.352297", "path": ["**/details_harness|winogrande|5_2023-10-15T22-16-11.352297.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T22-16-11.352297.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T22_16_11.352297", "path": ["results_2023-10-15T22-16-11.352297.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T22-16-11.352297.parquet"]}]}]}
2023-10-15T21:16:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model togethercomputer/GPT-JT-Moderation-6B on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T22:16:11.352297 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model togethercomputer/GPT-JT-Moderation-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:16:11.352297(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model togethercomputer/GPT-JT-Moderation-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:16:11.352297(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 22, 31, 170, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model togethercomputer/GPT-JT-Moderation-6B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T22:16:11.352297(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
e1c3d5a1a51981cd6aab8c1ad08023e0691e85f1
# Dataset Card for Evaluation run of Corianas/590m ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Corianas/590m - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Corianas/590m](https://huggingface.co/Corianas/590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Corianas__590m", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-15T22:43:28.791779](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__590m/blob/main/results_2023-10-15T22-43-28.791779.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.010276845637583893, "em_stderr": 0.0010328242665282278, "f1": 0.0602705536912752, "f1_stderr": 0.0016432009705513089, "acc": 0.24228909873484075, "acc_stderr": 0.0074016381223505675 }, "harness|drop|3": { "em": 0.010276845637583893, "em_stderr": 0.0010328242665282278, "f1": 0.0602705536912752, "f1_stderr": 0.0016432009705513089 }, "harness|gsm8k|5": { "acc": 0.000758150113722517, "acc_stderr": 0.0007581501137225333 }, "harness|winogrande|5": { "acc": 0.48382004735595896, "acc_stderr": 0.014045126130978601 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
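The split naming described in the card above can be illustrated with a small helper. This is a sketch, not part of the leaderboard tooling: it assumes (from the split names visible in the `configs` metadata of these records) that a run timestamp such as `2023-10-15T22:43:28.791779` maps to a split name by replacing `-` and `:` with `_`.

```python
def run_timestamp_to_split_name(timestamp: str) -> str:
    """Map a run timestamp (e.g. '2023-10-15T22:43:28.791779') to the split
    name used inside each configuration (e.g. '2023_10_15T22_43_28.791779').

    Hypothetical helper: the mapping is inferred from the 'configs' metadata
    in these dataset cards, where '-' and ':' both appear to become '_'.
    """
    return timestamp.replace("-", "_").replace(":", "_")


# The "latest" split always mirrors the split of the most recent run.
print(run_timestamp_to_split_name("2023-10-15T22:43:28.791779"))
# → 2023_10_15T22_43_28.791779
```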
open-llm-leaderboard/details_Corianas__590m
[ "region:us" ]
2023-10-15T21:43:32+00:00
{"pretty_name": "Evaluation run of Corianas/590m", "dataset_summary": "Dataset automatically created during the evaluation run of model [Corianas/590m](https://huggingface.co/Corianas/590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__590m\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T22:43:28.791779](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__590m/blob/main/results_2023-10-15T22-43-28.791779.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.010276845637583893,\n \"em_stderr\": 0.0010328242665282278,\n \"f1\": 0.0602705536912752,\n \"f1_stderr\": 0.0016432009705513089,\n \"acc\": 0.24228909873484075,\n \"acc_stderr\": 0.0074016381223505675\n },\n \"harness|drop|3\": {\n \"em\": 0.010276845637583893,\n \"em_stderr\": 0.0010328242665282278,\n \"f1\": 0.0602705536912752,\n \"f1_stderr\": 0.0016432009705513089\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \"acc_stderr\": 0.0007581501137225333\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.48382004735595896,\n \"acc_stderr\": 0.014045126130978601\n }\n}\n```", "repo_url": "https://huggingface.co/Corianas/590m", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T22_43_28.791779", "path": ["**/details_harness|drop|3_2023-10-15T22-43-28.791779.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T22-43-28.791779.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T22_43_28.791779", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-43-28.791779.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-43-28.791779.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T22_43_28.791779", "path": ["**/details_harness|winogrande|5_2023-10-15T22-43-28.791779.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T22-43-28.791779.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_15T22_43_28.791779", "path": ["results_2023-10-15T22-43-28.791779.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T22-43-28.791779.parquet"]}]}]}
2023-10-15T21:43:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Corianas/590m ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Corianas/590m on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-15T22:43:28.791779 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of Corianas/590m", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:43:28.791779(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Corianas/590m", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-15T22:43:28.791779(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 15, 31, 163, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Corianas/590m## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Corianas/590m on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T22:43:28.791779(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
d20ceb55f08d5aab0f44c077208409a25c412c6f
<div style='background: #ffeef1; border: 1px solid #fd91a4; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>NSFW</h3> <p style='margin:0'>This dataset is not suitable for use by minors. The dataset contains X-rated/NSFW content.</p> </div> <div style='background: #eefff1; border: 1px solid #a4fd91; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>For Testing Only</h3> <p style='margin:0'>Unless you are running tests, you should use the <a href="https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-curated">curated V3 dataset</a>.</p> </div> # E621 Rising V3: Micro Test Image Dataset * **188** images (35MB) downloaded from `e621.net` (90% of samples), `gelbooru.com`, `danbooru.com`, and `rule34.xxx`
hearmeneigh/e621-rising-v3-micro
[ "not-for-all-audiences", "region:us" ]
2023-10-15T22:26:01+00:00
{"pretty_name": "E621 Rising V3 Micro Test Image Dataset", "dataset_info": {"features": [{"name": "source_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "tags", "sequence": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "selector", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37835842.0, "num_examples": 188}], "download_size": 37637506, "dataset_size": 37835842.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["not-for-all-audiences"]}
2023-10-19T20:00:46+00:00
[]
[]
TAGS #not-for-all-audiences #region-us
<div style='background: #ffeef1; border: 1px solid #fd91a4; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>NSFW</h3> <p style='margin:0'>This dataset is not suitable for use by minors. The dataset contains X-rated/NSFW content.</p> </div> <div style='background: #eefff1; border: 1px solid #a4fd91; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>For Testing Only</h3> <p style='margin:0'>Unless you are running tests, you should use the <a href="URL">curated V3 dataset</a>.</p> </div> # E621 Rising V3: Micro Test Image Dataset * 188 images (35MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'
[ "# E621 Rising V3: Micro Test Image Dataset\n* 188 images (35MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
[ "TAGS\n#not-for-all-audiences #region-us \n", "# E621 Rising V3: Micro Test Image Dataset\n* 188 images (35MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
[ 15, 45 ]
[ "passage: TAGS\n#not-for-all-audiences #region-us \n# E621 Rising V3: Micro Test Image Dataset\n* 188 images (35MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
e079ad73884a25dc3489753a8eacd1691c5dbcd0
# Dataset Card for "arxiv_embeddings_480k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sproos/arxiv_embeddings_480k
[ "region:us" ]
2023-10-15T22:54:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "embedding", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 6351419194, "num_examples": 481271}], "download_size": 6014930006, "dataset_size": 6351419194}}
2023-10-15T23:04:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "arxiv_embeddings_480k" More Information needed
[ "# Dataset Card for \"arxiv_embeddings_480k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"arxiv_embeddings_480k\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"arxiv_embeddings_480k\"\n\nMore Information needed" ]
de6ff156b0943ef561ae369d03f30e0865a6b6d2
All conversations were generated by Mistral 7B. The themes are adventure, haremlit, and men's adventure.
twodgirl/haremlit
[ "language:en", "conversational", "adventure", "fantasy", "fiction", "novel", "not-for-all-audiences", "region:us" ]
2023-10-15T23:00:31+00:00
{"language": ["en"], "tags": ["conversational", "adventure", "fantasy", "fiction", "novel", "not-for-all-audiences"]}
2023-10-15T23:18:31+00:00
[]
[ "en" ]
TAGS #language-English #conversational #adventure #fantasy #fiction #novel #not-for-all-audiences #region-us
All conversations were generated by Mistral 7B. The themes are adventure, haremlit, and men's adventure.
[]
[ "TAGS\n#language-English #conversational #adventure #fantasy #fiction #novel #not-for-all-audiences #region-us \n" ]
[ 36 ]
[ "passage: TAGS\n#language-English #conversational #adventure #fantasy #fiction #novel #not-for-all-audiences #region-us \n" ]
33b10ac9869447f1c39f171a2243957983145062
<div style='background: #ffeef1; border: 1px solid #fd91a4; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>NSFW</h3> <p style='margin:0'>This dataset is not suitable for use by minors. The dataset contains X-rated/NSFW content.</p> </div> <div style='background: #eefff1; border: 1px solid #a4fd91; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>For Testing Only</h3> <p style='margin:0'>Unless you are running tests, you should use the <a href="https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-curated">curated V3 dataset</a>.</p> </div> # E621 Rising V3: Small Test Image Dataset * **3,747** images (690MB) downloaded from `e621.net` (90% of samples), `gelbooru.com`, `danbooru.com`, and `rule34.xxx`
hearmeneigh/e621-rising-v3-small
[ "not-for-all-audiences", "region:us" ]
2023-10-15T23:38:24+00:00
{"pretty_name": "E621 Rising V3 Small Test Image Dataset", "dataset_info": {"features": [{"name": "source_id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "tags", "sequence": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "selector", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 723782116.878, "num_examples": 3747}], "download_size": 719774705, "dataset_size": 723782116.878}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["not-for-all-audiences"]}
2023-10-19T20:02:34+00:00
[]
[]
TAGS #not-for-all-audiences #region-us
<div style='background: #ffeef1; border: 1px solid #fd91a4; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>NSFW</h3> <p style='margin:0'>This dataset is not suitable for use by minors. The dataset contains X-rated/NSFW content.</p> </div> <div style='background: #eefff1; border: 1px solid #a4fd91; padding:1em; border-radius:3px; margin-bottom:2em;'> <h3 style='margin:0'>For Testing Only</h3> <p style='margin:0'>Unless you are running tests, you should use the <a href="URL">curated V3 dataset</a>.</p> </div> # E621 Rising V3: Small Test Image Dataset * 3,747 images (690MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'
[ "# E621 Rising V3: Small Test Image Dataset\n* 3,747 images (690MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
[ "TAGS\n#not-for-all-audiences #region-us \n", "# E621 Rising V3: Small Test Image Dataset\n* 3,747 images (690MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
[ 15, 47 ]
[ "passage: TAGS\n#not-for-all-audiences #region-us \n# E621 Rising V3: Small Test Image Dataset\n* 3,747 images (690MB) downloaded from 'URL' (90% of samples), 'URL', 'URL', and 'URL'" ]
4d593905fc50a03998ee85c06ca777ca1ede4619
# Dataset Card for Evaluation run of aiplanet/effi-7b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/aiplanet/effi-7b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [aiplanet/effi-7b](https://huggingface.co/aiplanet/effi-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_aiplanet__effi-7b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-16T00:38:54.872293](https://huggingface.co/datasets/open-llm-leaderboard/details_aiplanet__effi-7b/blob/main/results_2023-10-16T00-38-54.872293.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0014681208053691276, "em_stderr": 0.0003921042190298541, "f1": 0.06146078020134238, "f1_stderr": 0.0013862861484435665, "acc": 0.37858887140948305, "acc_stderr": 0.008690432281689055 }, "harness|drop|3": { "em": 0.0014681208053691276, "em_stderr": 0.0003921042190298541, "f1": 0.06146078020134238, "f1_stderr": 0.0013862861484435665 }, "harness|gsm8k|5": { "acc": 0.03184230477634572, "acc_stderr": 0.004836348558260928 }, "harness|winogrande|5": { "acc": 0.7253354380426204, "acc_stderr": 0.012544516005117185 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_aiplanet__effi-7b
[ "region:us" ]
2023-10-15T23:38:58+00:00
{"pretty_name": "Evaluation run of aiplanet/effi-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [aiplanet/effi-7b](https://huggingface.co/aiplanet/effi-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aiplanet__effi-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T00:38:54.872293](https://huggingface.co/datasets/open-llm-leaderboard/details_aiplanet__effi-7b/blob/main/results_2023-10-16T00-38-54.872293.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298541,\n \"f1\": 0.06146078020134238,\n \"f1_stderr\": 0.0013862861484435665,\n \"acc\": 0.37858887140948305,\n \"acc_stderr\": 0.008690432281689055\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298541,\n \"f1\": 0.06146078020134238,\n \"f1_stderr\": 0.0013862861484435665\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03184230477634572,\n \"acc_stderr\": 0.004836348558260928\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7253354380426204,\n \"acc_stderr\": 0.012544516005117185\n }\n}\n```", "repo_url": "https://huggingface.co/aiplanet/effi-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T00_38_54.872293", "path": ["**/details_harness|drop|3_2023-10-16T00-38-54.872293.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T00-38-54.872293.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T00_38_54.872293", "path": ["**/details_harness|gsm8k|5_2023-10-16T00-38-54.872293.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T00-38-54.872293.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T00_38_54.872293", "path": ["**/details_harness|winogrande|5_2023-10-16T00-38-54.872293.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T00-38-54.872293.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T00_38_54.872293", "path": ["results_2023-10-16T00-38-54.872293.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T00-38-54.872293.parquet"]}]}]}
2023-10-15T23:39:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of aiplanet/effi-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model aiplanet/effi-7b on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T00:38:54.872293 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of aiplanet/effi-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model aiplanet/effi-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T00:38:54.872293(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of aiplanet/effi-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model aiplanet/effi-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T00:38:54.872293(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 16, 31, 164, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of aiplanet/effi-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model aiplanet/effi-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T00:38:54.872293(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
cf05eb8b3e54bd40378a38e1d47be20d46b7b320
# Dataset Card for "merged-no-pad-16384" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
shossain/merged-no-pad-16384
[ "region:us" ]
2023-10-15T23:46:28+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 1226064029, "num_examples": 6401}], "download_size": 336720252, "dataset_size": 1226064029}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-11-06T22:00:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "merged-no-pad-16384" More Information needed
[ "# Dataset Card for \"merged-no-pad-16384\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"merged-no-pad-16384\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"merged-no-pad-16384\"\n\nMore Information needed" ]
32902f0796763d65a385292fd773aba30ca2255d
# Dataset Card for Evaluation run of psyche/kollama2-7b-v2

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/psyche/kollama2-7b-v2
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [psyche/kollama2-7b-v2](https://huggingface.co/psyche/kollama2-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_psyche__kollama2-7b-v2",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-16T01:12:44.878519](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kollama2-7b-v2/blob/main/results_2023-10-16T01-12-44.878519.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.01740771812080537, "em_stderr": 0.0013393597649753845, "f1": 0.10400272651006709, "f1_stderr": 0.0021202520572007394, "acc": 0.41065886057278334, "acc_stderr": 0.009434613134114641 }, "harness|drop|3": { "em": 0.01740771812080537, "em_stderr": 0.0013393597649753845, "f1": 0.10400272651006709, "f1_stderr": 0.0021202520572007394 }, "harness|gsm8k|5": { "acc": 0.06520090978013647, "acc_stderr": 0.006800302989321092 }, "harness|winogrande|5": { "acc": 0.7561168113654302, "acc_stderr": 0.012068923278908189 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_psyche__kollama2-7b-v2
[ "region:us" ]
2023-10-16T00:12:48+00:00
{"pretty_name": "Evaluation run of psyche/kollama2-7b-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [psyche/kollama2-7b-v2](https://huggingface.co/psyche/kollama2-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_psyche__kollama2-7b-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T01:12:44.878519](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kollama2-7b-v2/blob/main/results_2023-10-16T01-12-44.878519.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.01740771812080537,\n \"em_stderr\": 0.0013393597649753845,\n \"f1\": 0.10400272651006709,\n \"f1_stderr\": 0.0021202520572007394,\n \"acc\": 0.41065886057278334,\n \"acc_stderr\": 0.009434613134114641\n },\n \"harness|drop|3\": {\n \"em\": 0.01740771812080537,\n \"em_stderr\": 0.0013393597649753845,\n \"f1\": 0.10400272651006709,\n \"f1_stderr\": 0.0021202520572007394\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06520090978013647,\n \"acc_stderr\": 0.006800302989321092\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7561168113654302,\n \"acc_stderr\": 0.012068923278908189\n }\n}\n```", "repo_url": "https://huggingface.co/psyche/kollama2-7b-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T01_12_44.878519", "path": ["**/details_harness|drop|3_2023-10-16T01-12-44.878519.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T01-12-44.878519.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T01_12_44.878519", "path": ["**/details_harness|gsm8k|5_2023-10-16T01-12-44.878519.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T01-12-44.878519.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T01_12_44.878519", "path": ["**/details_harness|winogrande|5_2023-10-16T01-12-44.878519.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T01-12-44.878519.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T01_12_44.878519", "path": ["results_2023-10-16T01-12-44.878519.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T01-12-44.878519.parquet"]}]}]}
2023-10-16T00:12:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of psyche/kollama2-7b-v2 ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model psyche/kollama2-7b-v2 on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T01:12:44.878519 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of psyche/kollama2-7b-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model psyche/kollama2-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T01:12:44.878519(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of psyche/kollama2-7b-v2", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model psyche/kollama2-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T01:12:44.878519(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 20, 31, 168, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of psyche/kollama2-7b-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model psyche/kollama2-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T01:12:44.878519(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
e8906427de2c50996bae7ba81eef7c6ff9fe7d02
# Dataset Card for "iruca_llama2_1kplus_demo" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xinqiyang/iruca_llama2_1kplus_demo
[ "region:us" ]
2023-10-16T00:47:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1656842, "num_examples": 1015}], "download_size": 969415, "dataset_size": 1656842}}
2023-10-16T00:47:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iruca_llama2_1kplus_demo" More Information needed
[ "# Dataset Card for \"iruca_llama2_1kplus_demo\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iruca_llama2_1kplus_demo\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iruca_llama2_1kplus_demo\"\n\nMore Information needed" ]
bc59f90037cf4f82c56c27a018d578056fe972d3
# Dataset Card for "ai_images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
umm-maybe/ai_images
[ "region:us" ]
2023-10-16T01:00:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "train_dataset"}}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 540439882.0, "num_examples": 304}], "download_size": 540208895, "dataset_size": 540439882.0}}
2023-10-16T01:00:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ai_images" More Information needed
[ "# Dataset Card for \"ai_images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ai_images\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ai_images\"\n\nMore Information needed" ]
23d5dee2a8943999f68ee694f492f5e65939a74a
# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B

## Dataset Description

- **Homepage:** 
- **Repository:** https://huggingface.co/TaylorAI/Flash-Llama-13B
- **Paper:** 
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

Dataset automatically created during the evaluation run of model [TaylorAI/Flash-Llama-13B](https://huggingface.co/TaylorAI/Flash-Llama-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-10-16T02:07:21.607373](https://huggingface.co/datasets/open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B/blob/main/results_2023-10-16T02-07-21.607373.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.0014681208053691276, "em_stderr": 0.00039210421902982666, "f1": 0.0607822986577181, "f1_stderr": 0.0013583957676382913, "acc": 0.43739636770101, "acc_stderr": 0.010228023491905505 }, "harness|drop|3": { "em": 0.0014681208053691276, "em_stderr": 0.00039210421902982666, "f1": 0.0607822986577181, "f1_stderr": 0.0013583957676382913 }, "harness|gsm8k|5": { "acc": 0.10841546626231995, "acc_stderr": 0.008563852506627487 }, "harness|winogrande|5": { "acc": 0.7663772691397001, "acc_stderr": 0.011892194477183524 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B
[ "region:us" ]
2023-10-16T01:07:25+00:00
{"pretty_name": "Evaluation run of TaylorAI/Flash-Llama-13B", "dataset_summary": "Dataset automatically created during the evaluation run of model [TaylorAI/Flash-Llama-13B](https://huggingface.co/TaylorAI/Flash-Llama-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T02:07:21.607373](https://huggingface.co/datasets/open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B/blob/main/results_2023-10-16T02-07-21.607373.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.00039210421902982666,\n \"f1\": 0.0607822986577181,\n \"f1_stderr\": 0.0013583957676382913,\n \"acc\": 0.43739636770101,\n \"acc_stderr\": 0.010228023491905505\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.00039210421902982666,\n \"f1\": 0.0607822986577181,\n \"f1_stderr\": 0.0013583957676382913\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10841546626231995,\n \"acc_stderr\": 0.008563852506627487\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183524\n }\n}\n```", "repo_url": "https://huggingface.co/TaylorAI/Flash-Llama-13B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T02_07_21.607373", "path": ["**/details_harness|drop|3_2023-10-16T02-07-21.607373.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T02-07-21.607373.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T02_07_21.607373", "path": ["**/details_harness|gsm8k|5_2023-10-16T02-07-21.607373.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T02-07-21.607373.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T02_07_21.607373", "path": ["**/details_harness|winogrande|5_2023-10-16T02-07-21.607373.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T02-07-21.607373.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T02_07_21.607373", "path": ["results_2023-10-16T02-07-21.607373.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T02-07-21.607373.parquet"]}]}]}
2023-10-16T01:07:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model TaylorAI/Flash-Llama-13B on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T02:07:21.607373 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TaylorAI/Flash-Llama-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T02:07:21.607373(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model TaylorAI/Flash-Llama-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T02:07:21.607373(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 19, 31, 167, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TaylorAI/Flash-Llama-13B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T02:07:21.607373(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
15b5f48e6ebe7223dc725191134fdeef0edc56fd
# Dataset Card for "iruca_llama2_1kplus" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xinqiyang/iruca_llama2_1kplus
[ "region:us" ]
2023-10-16T01:36:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1657441, "num_examples": 1016}], "download_size": 970789, "dataset_size": 1657441}}
2023-10-16T02:11:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iruca_llama2_1kplus" More Information needed
[ "# Dataset Card for \"iruca_llama2_1kplus\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iruca_llama2_1kplus\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iruca_llama2_1kplus\"\n\nMore Information needed" ]
160b6139fabb0264c06dd13b135b091c3937d22b
# Dataset Card for "iruca_llama2_1kplus_shuffle" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xinqiyang/iruca_llama2_1kplus_shuffle
[ "region:us" ]
2023-10-16T02:28:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1657441.0, "num_examples": 1016}], "download_size": 980826, "dataset_size": 1657441.0}}
2023-10-16T02:28:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "iruca_llama2_1kplus_shuffle" More Information needed
[ "# Dataset Card for \"iruca_llama2_1kplus_shuffle\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"iruca_llama2_1kplus_shuffle\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"iruca_llama2_1kplus_shuffle\"\n\nMore Information needed" ]
a1cc3089bb6d6d44f55ab164c2ba476f12afcbb2
## Overview This dataset contains spoken Arabic digits from 40 speakers from multiple Arab communities and local dialects. It is augmented using various techniques to increase the size of the dataset and improve its diversity. The recordings went through a number of pre-processors to evaluate and process the sound quality using Audacity app. ## Dataset Creation The dataset was created by collecting recordings of the digits 0-9 from 40 speakers from different Arab communities and local dialects. The recordings were collected using a high-quality microphone in a quiet environment. The dataset was then augmented using the following techniques: * **Speed perturbation:** The speed of the recordings was randomly varied by up to 10%. * **Pitch perturbation:** The pitch of the recordings was randomly varied by up to 10%. * **Noise addition:** White noise was added to the recordings at a low level. This augmentation resulted in a total of 3,216 recordings. ## Dataset Statistics The dataset contains the following statistics: * Number of speakers: 40 * Number of dialects: 5 * Number of recordings: 402 * Average recording length: 1 second ## Dataset Split The dataset is split into the following subsets: * Train: 282 recordings * Validation: 80 recordings * Test: 40 recordings ## Dataset Usage This dataset can be used for a variety of tasks, including: * Automatic speech recognition (ASR) * Speaker identification * Language modeling * Dialect identification ## Dataset License This dataset is licensed under the apache-2.0. ## Dataset Citation If you use this dataset in your research, please cite the following paper: ... soon ....
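The speed- and noise-perturbation steps above can be sketched in plain NumPy. This is an illustrative sketch, not the pipeline used to build this dataset; the function names are hypothetical, and a pitch shift without a speed change would additionally need a phase vocoder (e.g. `librosa.effects.pitch_shift`):

```python
import numpy as np

def speed_perturb(x: np.ndarray, factor: float) -> np.ndarray:
    """Resample the waveform so it plays back `factor` times faster."""
    n_out = int(round(len(x) / factor))
    t_out = np.linspace(0.0, len(x) - 1.0, n_out)
    return np.interp(t_out, np.arange(len(x)), x)

def add_white_noise(x: np.ndarray, snr_db: float, rng: np.random.Generator) -> np.ndarray:
    """Add white noise at a given signal-to-noise ratio (in dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

rng = np.random.default_rng(0)
sr = 16000
clean = np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)   # dummy 1-second "digit"
fast = speed_perturb(clean, 1.10)                        # ~10% faster playback
noisy = add_white_noise(clean, snr_db=30.0, rng=rng)     # low-level white noise
```

Each perturbed copy keeps the original label, which is how such augmentation multiplies the number of usable recordings.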
mohnasgbr/spoken-arabic-digits
[ "size_categories:n<1K", "language:ar", "license:apache-2.0", "region:us" ]
2023-10-16T02:37:49+00:00
{"language": ["ar"], "license": "apache-2.0", "size_categories": ["n<1K"], "pretty_name": "SA2Dataset"}
2023-10-16T03:17:06+00:00
[]
[ "ar" ]
TAGS #size_categories-n<1K #language-Arabic #license-apache-2.0 #region-us
## Overview This dataset contains spoken Arabic digits from 40 speakers from multiple Arab communities and local dialects. It is augmented using various techniques to increase the size of the dataset and improve its diversity. The recordings went through a number of pre-processors to evaluate and process the sound quality using Audacity app. ## Dataset Creation The dataset was created by collecting recordings of the digits 0-9 from 40 speakers from different Arab communities and local dialects. The recordings were collected using a high-quality microphone in a quiet environment. The dataset was then augmented using the following techniques: * Speed perturbation: The speed of the recordings was randomly varied by up to 10%. * Pitch perturbation: The pitch of the recordings was randomly varied by up to 10%. * Noise addition: White noise was added to the recordings at a low level. This augmentation resulted in a total of 3,216 recordings. ## Dataset Statistics The dataset contains the following statistics: * Number of speakers: 40 * Number of dialects: 5 * Number of recordings: 402 * Average recording length: 1 second ## Dataset Split The dataset is split into the following subsets: * Train: 282 recordings * Validation: 80 recordings * Test: 40 recordings ## Dataset Usage This dataset can be used for a variety of tasks, including: * Automatic speech recognition (ASR) * Speaker identification * Language modeling * Dialect identification ## Dataset License This dataset is licensed under the apache-2.0. ## Dataset Citation If you use this dataset in your research, please cite the following paper: ... soon ....
[ "## Overview\n\nThis dataset contains spoken Arabic digits from 40 speakers from multiple Arab communities and local dialects. It is augmented using various techniques to increase the size of the dataset and improve its diversity. The recordings went through a number of pre-processors to evaluate and process the sound quality using Audacity app.", "## Dataset Creation\n\nThe dataset was created by collecting recordings of the digits 0-9 from 40 speakers from different Arab communities and local dialects. The recordings were collected using a high-quality microphone in a quiet environment.\n\nThe dataset was then augmented using the following techniques:\n\n* Speed perturbation: The speed of the recordings was randomly varied by up to 10%.\n* Pitch perturbation: The pitch of the recordings was randomly varied by up to 10%.\n* Noise addition: White noise was added to the recordings at a low level.\n\nThis augmentation resulted in a total of 3,216 recordings.", "## Dataset Statistics\n\nThe dataset contains the following statistics:\n\n* Number of speakers: 40\n* Number of dialects: 5\n* Number of recordings: 402 \n* Average recording length: 1 second", "## Dataset Split\n\nThe dataset is split into the following subsets:\n\n* Train: 282 recordings\n* Validation: 80 recordings\n* Test: 40 recordings", "## Dataset Usage\n\nThis dataset can be used for a variety of tasks, including:\n\n* Automatic speech recognition (ASR)\n* Speaker identification\n* Language modeling\n* Dialect identification", "## Dataset License\n\nThis dataset is licensed under the apache-2.0.", "## Dataset Citation\n\nIf you use this dataset in your research, please cite the following paper:\n\n\n... soon ...." ]
[ "TAGS\n#size_categories-n<1K #language-Arabic #license-apache-2.0 #region-us \n", "## Overview\n\nThis dataset contains spoken Arabic digits from 40 speakers from multiple Arab communities and local dialects. It is augmented using various techniques to increase the size of the dataset and improve its diversity. The recordings went through a number of pre-processors to evaluate and process the sound quality using Audacity app.", "## Dataset Creation\n\nThe dataset was created by collecting recordings of the digits 0-9 from 40 speakers from different Arab communities and local dialects. The recordings were collected using a high-quality microphone in a quiet environment.\n\nThe dataset was then augmented using the following techniques:\n\n* Speed perturbation: The speed of the recordings was randomly varied by up to 10%.\n* Pitch perturbation: The pitch of the recordings was randomly varied by up to 10%.\n* Noise addition: White noise was added to the recordings at a low level.\n\nThis augmentation resulted in a total of 3,216 recordings.", "## Dataset Statistics\n\nThe dataset contains the following statistics:\n\n* Number of speakers: 40\n* Number of dialects: 5\n* Number of recordings: 402 \n* Average recording length: 1 second", "## Dataset Split\n\nThe dataset is split into the following subsets:\n\n* Train: 282 recordings\n* Validation: 80 recordings\n* Test: 40 recordings", "## Dataset Usage\n\nThis dataset can be used for a variety of tasks, including:\n\n* Automatic speech recognition (ASR)\n* Speaker identification\n* Language modeling\n* Dialect identification", "## Dataset License\n\nThis dataset is licensed under the apache-2.0.", "## Dataset Citation\n\nIf you use this dataset in your research, please cite the following paper:\n\n\n... soon ...." ]
[ 29, 74, 141, 45, 37, 43, 17, 25 ]
[ "passage: TAGS\n#size_categories-n<1K #language-Arabic #license-apache-2.0 #region-us \n## Overview\n\nThis dataset contains spoken Arabic digits from 40 speakers from multiple Arab communities and local dialects. It is augmented using various techniques to increase the size of the dataset and improve its diversity. The recordings went through a number of pre-processors to evaluate and process the sound quality using Audacity app.## Dataset Creation\n\nThe dataset was created by collecting recordings of the digits 0-9 from 40 speakers from different Arab communities and local dialects. The recordings were collected using a high-quality microphone in a quiet environment.\n\nThe dataset was then augmented using the following techniques:\n\n* Speed perturbation: The speed of the recordings was randomly varied by up to 10%.\n* Pitch perturbation: The pitch of the recordings was randomly varied by up to 10%.\n* Noise addition: White noise was added to the recordings at a low level.\n\nThis augmentation resulted in a total of 3,216 recordings.## Dataset Statistics\n\nThe dataset contains the following statistics:\n\n* Number of speakers: 40\n* Number of dialects: 5\n* Number of recordings: 402 \n* Average recording length: 1 second## Dataset Split\n\nThe dataset is split into the following subsets:\n\n* Train: 282 recordings\n* Validation: 80 recordings\n* Test: 40 recordings## Dataset Usage\n\nThis dataset can be used for a variety of tasks, including:\n\n* Automatic speech recognition (ASR)\n* Speaker identification\n* Language modeling\n* Dialect identification## Dataset License\n\nThis dataset is licensed under the apache-2.0.## Dataset Citation\n\nIf you use this dataset in your research, please cite the following paper:\n\n\n... soon ...." ]
3372868a8827d3a47d282060277a9b1274f4a753
<div align="center"> <img width="640" alt="sargishunanyan/thermostats" src="https://huggingface.co/datasets/sargishunanyan/thermostats/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['housing', 'thermostat'] ``` ### Number of Images ```json {'valid': 35, 'test': 18, 'train': 123} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("sargishunanyan/thermostats", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/yolo-po0ro/thermo-part-3/dataset/1](https://universe.roboflow.com/yolo-po0ro/thermo-part-3/dataset/1?ref=roboflow2huggingface) ### Citation ``` @misc{ thermo-part-3_dataset, title = { Thermo, part 3 Dataset }, type = { Open Source Dataset }, author = { Yolo }, howpublished = { \\url{ https://universe.roboflow.com/yolo-po0ro/thermo-part-3 } }, url = { https://universe.roboflow.com/yolo-po0ro/thermo-part-3 }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2023 }, month = { oct }, note = { visited on 2023-10-18 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on October 16, 2023 at 4:27 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com The dataset includes 176 images. Thermostats are annotated in COCO format. 
The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 640x640 (Stretch) No image augmentation techniques were applied.
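As a further usage sketch, per-class annotation counts can be tallied once a split is loaded. The `objects`/`category` field names below are an assumed COCO-export schema (not confirmed by this card), so the demo runs on a small in-memory stand-in rather than the real split:

```python
from collections import Counter

def label_histogram(split, names=("housing", "thermostat")):
    """Count annotated categories in a COCO-style split where each record
    carries an 'objects' dict with a 'category' id list (schema assumed)."""
    counts = Counter()
    for record in split:
        for cat_id in record["objects"]["category"]:
            counts[names[cat_id]] += 1
    return counts

# stand-in records mimicking the assumed schema
fake_split = [
    {"objects": {"category": [0, 1, 1]}},
    {"objects": {"category": [1]}},
]
print(label_histogram(fake_split))  # thermostat: 3, housing: 1
```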
sargishunanyan/thermostats
[ "task_categories:image-segmentation", "roboflow", "roboflow2huggingface", "region:us" ]
2023-10-16T03:11:07+00:00
{"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface"]}
2023-10-18T15:07:35+00:00
[]
[]
TAGS #task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us
<div align="center"> <img width="640" alt="sargishunanyan/thermostats" src="URL </div> ### Dataset Labels ### Number of Images ### How to Use - Install datasets: - Load the dataset: ### Roboflow Dataset Page URL ### License CC BY 4.0 ### Dataset Summary This dataset was exported via URL on October 16, 2023 at 4:27 AM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand and search unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time For state of the art Computer Vision training notebooks you can use with this dataset, visit URL To find over 100k other datasets and pre-trained models, visit URL The dataset includes 176 images. Thermostats are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 640x640 (Stretch) No image augmentation techniques were applied.
[ "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on October 16, 2023 at 4:27 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 176 images.\nThermostats are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied." ]
[ "TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us \n", "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on October 16, 2023 at 4:27 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 176 images.\nThermostats are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied." ]
[ 28, 5, 5, 18, 8, 6, 201 ]
[ "passage: TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us \n### Dataset Labels### Number of Images### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:### Roboflow Dataset Page\nURL### License\nCC BY 4.0### Dataset Summary\nThis dataset was exported via URL on October 16, 2023 at 4:27 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 176 images.\nThermostats are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied." ]
1d605c2880d3163230d04a34ef39728f516c7d3c
## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation Code and data accompanying the manuscript titled "Multi-fidelity climate model parameterization for better generalization and extrapolation", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine. ## Abstract Machine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate to warmer climates leveraging abundant low-fidelity data and a higher accuracy using resolving high-fidelity data. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources, while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change. ## Citation @article{Bhouri2023MF_RPN_cv_param, title = {Multi-fidelity climate model parameterization for better generalization and extrapolation}, author = {Bhouri, Mohamed Aziz and Peng, Liran and Pritchard, Michael S. 
and Gentine, Pierre }, journal = {arXiv preprint arXiv:2309.10231}, doi = {https://doi.org/10.48550/arXiv.2309.10231}, year = {2023}, } - The code was tested using the jax version 0.3.13, the jaxlib version 0.3.10, the numpy version 1.20.1 and the scipy version 1.7.2. - All code file names intentionally start with numbers in order to make the processing order needed to run them easier to follow: ##################################################################################################### 1. Files "0_data_process_CAM5.py" and "0_data_process_SPCAM5.py" process the raw data generated by the CESM2.1.3 CAM5 and SPCAM5 models. In particular, only the variables chosen for the problem of interest are kept, and a temporal subsampling of factor 2 is applied. In addition, data is concatenated over several days in order to reduce the number of final files. The number of days considered for concatenation is determined by how much memory is available on the hardware on which the scripts are run. "0_data_process_CAM5.py" is used to process CAM5 +4K and +8K data and the resulting files are saved under folders "data_CAM5_4K" and "data_CAM5_8K" respectively. "0_data_process_SPCAM5.py" is used to process SPCAM5 historical and +4K data and the resulting files are saved under folders "data_SPCAM5_hist" and "data_SPCAM5_4K" respectively. ##################################################################################################### 2. File "1_create_train_test.py" creates train and test datasets with only the final relevant variables for the convection parameterization (see manuscript). Datasets are concatenated along the whole time period. The scripts in step 1 are needed because these scripts run over all GCM outputs, which are relatively expensive in terms of memory: a concatenation over several months by directly loading all raw GCM outputs is not doable on our available hardware, hence the two-step approach to data concatenation. "1_create_train_test.py" creates the high-fidelity training (SPCAM5 historical run for 3 months) and testing (SPCAM5 +4K for a year) datasets. It also creates the two candidate low-fidelity training datasets (CAM5 +4K and +8K for a year). ##################################################################################################### 3. File "2_candle_plots_data_distr.py" shows the data distribution for the 5 pressure levels 137, 259, 494, 761 and 958 hPa for the heat tendency and specific humidity, and for the highest pressure level (lowest altitude) for the moisture tendency. It creates the candle plots corresponding to these data distributions, available in the manuscript ("candle_plots_5_pr_lvls_heat_tend_and_spec_hum.png" and "candle_plots_1st_lvl_SS_moist_tend.png"). ##################################################################################################### 4. File "2_norm.py" computes and saves the mean and standard deviation of the parameterization inputs and outputs based on the low-fidelity training data (CAM5 +8K simulation of a year) and the high-fidelity training data (SPCAM5 historical run for a period of three months). The results are saved in folder "norm". ##################################################################################################### 5. Files "3_train_RPN_MF.py" and "3_train_RPN_SF.py" train the multi- and single-fidelity models and save their parameters in folders "MF_param" and "SF_param" respectively. The number of models to be trained in parallel by running any of the scripts once is fixed by the variable "ensemble_size". Given the available hardware, we had to use "ensemble_size=1" since we could only access single nodes, and we varied "n_run_param" from 0 to 127. However, we were able to access multiple single nodes independently, hence the training is ultimately conducted in parallel. "3_train_RPN_SF.py" is also used to train the deterministic model by making the variable "N_rpn_SF" equal to "N_tot_SF" in order to use all training data and by changing the subfolder within "SF_param" where the parameters are saved. ##################################################################################################### 6. File "4_concat_param.py" concatenates the parameters so that the result corresponds to the parameters that would be saved if 128 NNs were trained with a single run of the scripts detailed in point 5. The size of the resulting individual files can reach 134 MB, which prevents uploading them to GitHub directly, but we wanted to show that a concise parameter representation for RPN is doable. Subsequent scripts use the parameters that were saved separately for each individual RPN member (resulting from point 5 above). ##################################################################################################### 7. File "4_pred_RPN_det.py" computes and saves the deterministic prediction for the test dataset. Files "4_pred_RPN_SF.py", "4_pred_RPN_LF.py" and "4_pred_RPN_MF.py" compute and save the test-dataset predictions obtained for each individual member of SF-RPN, LF-RPN and MF-RPN. We had to perform this step since our hardware did not have enough virtual memory to make the ensemble predictions for 128 million test datapoints. If memory allows, the ensemble predictions can be performed at once by changing the variable "ensemble_size" to the actual ensemble size, and the related statistics (mean, standard deviation, higher-order moments, etc.) can then be computed directly. ##################################################################################################### 8. Files "5_mean_std_RPN_SF.py", "5_mean_std_RPN_LF.py" and "5_mean_std_RPN_MF.py" compute and save the mean and standard deviation of the ensemble predictions for the test dataset computed and saved in point 7 above.
As mentioned above, if memory allows, points 7 and 8 are merged into one step.

#####################################################################################################

9. File "6_reshape_pred_RPN.py" reshapes and saves the deterministic NN prediction for the test dataset, and the mean and standard deviation of the ensemble predictions for the test dataset for the SF-RPN, LF-RPN and MF-RPN models. It uses the predictions saved in step 8 and from running the script "4_pred_RPN_det.py" in step 7. File "6_reshape_pred_RPN.py" also reshapes and saves the actual test dataset output. The reshaped tensors have shape [dim_y x Nt x lat x lon], where dim_y=48 is the output dimension, Nt the total number of time steps for the test dataset, lat=96 the number of latitude points and lon=144 the number of longitude points. These results are saved in folders "data_SPCAM5_4K", "MF_param" and "SF_param".

#####################################################################################################

10. File "7_global_errors_temporal_errors.py" computes and saves global errors (if is_glob_err = 1) and temporal errors (if is_temp_MAE = 1 and/or is_temp_r2 = 1) for all models (det NN, SF-RPN, MF-RPN and LF-RPN). Global errors are saved in folder "glob_errors". Temporal errors are plotted and saved in folder "temp_plots". File "7_global_errors_temporal_errors.py" uses the results obtained in point 9.

#####################################################################################################

11. File "7_global_crps.py" computes and saves the CRPS scores for SF-RPN, MF-RPN and LF-RPN. The individual predictions within the ensemble for each of the models first need to be reshaped by setting "is_reshape_single_pred = 1"; then the corresponding CRPS score is computed and saved in folder "glob_errors" by setting "is_reshape_single_pred = 0".

#####################################################################################################

12.
File "7_long_lat_errors.py" computes and saves the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9. ##################################################################################################### 13. File "7_pressure_lat_errors" computes and saves the pressure(altitude)-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9. ##################################################################################################### 14. File "8_plot_global_errors.py" creates the plots for the global errors (MAE, R2 and CRPS) for all models (det NN, SF-RPN, MF-RPN and LF-RPN) using the results obtained in points 10 and 11. The plots are saved in folder "glob_errors". ##################################################################################################### 15. File "8_long_lat_plots.py" creates and saves the plots for the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folder "long_lat_plots" if variable "is_uncert = 0". These plots are based on the results obtained in point 12. File "8_long_lat_plots.py" also creates the plots for the longitude-latitude variations of the uncertainty for SF-RPN, MF-RPN and LF-RPN models if variable "is_uncert = 1". These plots are saved in folder "long_lat_uncert_plots" and are based on results obtained in point 9. ##################################################################################################### 16. File "8_pressure_lat_plots" creates and saves the plots for the pressure(altitude)-latitude variations of R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) under the names "r2_press_lat_heat.png" and "r2_press_lat_moist.png" for heat and moisture tendencies respectively. 
These plots are based on the results obtained in point 13.

#####################################################################################################

17. File "8_uncertainty_density_plot" creates the plots of the density of uncertainty as a function of error for the SF-RPN, MF-RPN and LF-RPN models. These plots are saved in folder "uncertainty_density_plots" and are based on the results obtained in point 9.

#####################################################################################################

18. File "9_uncertainty_video.py" creates and saves the videos of the complete spatio-temporal evolution of the MAEs and returned uncertainties for the heat and moisture tendencies from the different models (MF-RPN, LF-RPN and SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folder "videos". File "9_uncertainty_video.py" uses the results obtained in point 9.

#####################################################################################################

19. File "9_uncertainty_video_daily.py" creates and saves the videos of the spatio-temporal evolution of the MAEs based on daily-averaged predictions and the daily-averaged returned uncertainties for the heat and moisture tendencies from the different models (MF-RPN, LF-RPN and SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folder "videos". File "9_uncertainty_video_daily.py" uses the results obtained in point 9.
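As a concrete illustration of the error diagnostics above, the per-grid error maps (points 9 and 12) and the ensemble CRPS (point 11) can be sketched with plain NumPy on tensors reshaped to [dim_y x Nt x lat x lon]. This is a minimal sketch, not the repo's actual code: the function names are illustrative, and only the tensor shapes and the standard sample-based CRPS formula, CRPS = E|X − y| − 0.5·E|X − X′|, are taken as given.

```python
import numpy as np

def grid_errors(y_true, y_mean):
    """Per-grid-point MAE and R2, aggregated over the output and time axes.

    y_true, y_mean: arrays of shape [dim_y, Nt, lat, lon]
    (the repo uses dim_y=48, lat=96, lon=144; any shapes work here).
    """
    mae = np.mean(np.abs(y_true - y_mean), axis=(0, 1))        # [lat, lon] MAE map
    ss_res = np.sum((y_true - y_mean) ** 2, axis=(0, 1))
    clim = y_true.mean(axis=(0, 1), keepdims=True)             # per-grid-point mean
    ss_tot = np.sum((y_true - clim) ** 2, axis=(0, 1))
    return mae, 1.0 - ss_res / ss_tot                          # MAE map, R2 map

def empirical_crps(ensemble, y_true):
    """Sample-based CRPS averaged over all points.

    ensemble: [n_members, ...] member predictions, y_true: [...].
    CRPS = E|X - y| - 0.5 * E|X - X'| with X, X' drawn from the ensemble.
    """
    term1 = np.mean(np.abs(ensemble - y_true[None, ...]), axis=0)
    # Pairwise member spread |X - X'| via broadcasting over two member axes.
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]), axis=(0, 1))
    return float(np.mean(term1 - term2))
```

Note that the pairwise spread term scales with n_members² times the field size, which is one reason predictions are handled per member and reshaped in separate passes at the data sizes used here (128 members, 128 million test points).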
MohamedAzizBhouri/MF_RPN_convection_super_param_CAM5_SPCAM5
[ "license:mit", "region:us" ]
2023-10-16T03:35:11+00:00
{"license": "mit"}
2023-10-16T21:15:37+00:00
[]
[]
TAGS #license-mit #region-us
## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation

Code and data accompanying the manuscript titled "Multi-fidelity climate model parameterization for better generalization and extrapolation", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine.

## Abstract

Machine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate to warmer climates leveraging abundant low-fidelity data, and a higher accuracy using resolving high-fidelity data. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring a major increase in computational resources, while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning-based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change.

@article{Bhouri2023MF_RPN_cv_param,
  title   = {Multi-fidelity climate model parameterization for better generalization and extrapolation},
  author  = {Bhouri, Mohamed Aziz and Peng, Liran and Pritchard, Michael S.
             and Gentine, Pierre},
  journal = {arXiv preprint arXiv:2309.10231},
  doi     = {URL},
  year    = {2023},
}

- The code was tested using jax version 0.3.13, jaxlib version 0.3.10, numpy version 1.20.1 and scipy version 1.7.2.

- All code file names intentionally start with numbers in order to make the processing order needed to run them easier to follow:

#####################################################################################################

1. Files "0_data_process_CAM5.py" and "0_data_process_SPCAM5.py" process the raw data generated by the CESM2.1.3 CAM5 and SPCAM5 models. In particular, the variables chosen given the problem of interest are kept, and a temporal subsampling of factor 2 is implemented. In addition, the data is concatenated over several days in order to reduce the number of final files. The number of days considered for concatenation is determined by how much memory is available on the hardware on which the scripts are run. "0_data_process_CAM5.py" is used to process the CAM5 +4K and +8K data, and the resulting files are saved under folders "data_CAM5_4K" and "data_CAM5_8K" respectively. "0_data_process_SPCAM5.py" is used to process the SPCAM5 historical and +4K data, and the resulting files are saved under folders "data_SPCAM5_hist" and "data_SPCAM5_4K" respectively.

#####################################################################################################

2. File "1_create_train_test.py" creates the train and test datasets with only the final relevant variables for the convection parameterization (see manuscript). Datasets are concatenated along the whole time period. The scripts in step 1 are needed since these codes are run on all GCM outputs, which are relatively expensive in terms of memory. Hence a concatenation over several months by directly loading all GCM outputs is not doable given our available hardware. Therefore we needed this two-step approach for data concatenation.
"1_create_train_test.py" creates the high-fidelity training (SPCAM5 historical run for 3 month) and testing (SPCAM5 +4K for a year) datasets. It also creates the two candidate low-fidelity training datasets (CAM5 +4K and +8K for a year). ##################################################################################################### 3. File "2_candle_plots_data_distr.py" shows the data distribution for the 5 pressure levels 137, 259, 494, 761 and 958 hPa, for the heat tendency and specific humidity, and for the highest pressure level (lowest altitude) for the moisture tendency. It creates the candle plots corresponding to these data distributions and available in the manuscript ("candle_plots_5_pr_lvls_heat_tend_and_spec_hum.png" and "candle_plots_1st_lvl_SS_moist_tend.png"). ##################################################################################################### 4. File "2_norm.py" computes and saves the mean and standard deviation for parameterization inputs and outputs based on low-fidelity training data (CAM5 +8K simulation of a year) and high-fidelity training data (SPCAM historical run for a period of three months). The results are saved in folder "norm". ##################################################################################################### 5. Files" "3_train_RPN_MF.py" and "3_train_RPN_SF.py" train the multi- and single-fidelity models and save their parameters in folders "MF_param" and "SF_param" respectively. The number of models to be trained in parallel by running any of the scripts once is fixed by the variable "ensemble_size". Given the available hardware, we had to use "ensemble_size=1" since we could only access singular nodes and we varied "n_run_param" from 0 to 127. However, we were able to access multiple single nodes independently and hence the training is conducted in parallel ultimately. 
"3_train_RPN_SF.py" is also used to train the deterministic model by making the variable "N_rpn_SF" equal to "N_tot_SF" in order to use all training data and by changing the subfolder within "SF_param" where the parameters are saved. ##################################################################################################### 6. File "4_concat_param.py" concatenates the parameters so that it corresponds to parameters that would be saved if 128 NNs are trained with a singular run of the scripts detailed in point 5. The size of resulting individual files can go up to 134 mb which prevents uploading them into github directly but we wanted to show how a concise parameters representation for RPN is doable. Subsequent scripts use the parameters that were saved separately for each individual RPN member (resulting from point 5 above). ##################################################################################################### 7. File "4_pred_RPN_det.py" computes and saves the deterministic prediction for the test dataset. Files "4_pred_RPN_SF.py", "4_pred_RPN_LF.py" and "4_pred_RPN_MF.py" compute and save predictions for the test dataset obtained for each individual member of SF-RPN, LF-RPN and MF-RPN. We had to perform this step since our hardware did not have enough virtual memory to make the ensemble predictions for 128 million test datapoints. If memory allows, the ensemble predictions can be performed at once by changing the variable "ensemble_size" to the actual ensemble size and then compute related statistics (mean, standard deviation, higher-order moments, etc). ##################################################################################################### 8. Files "5_mean_std_RPN_SF.py", "5_mean_std_RPN_LF.py" and "5_mean_std_RPN_MF.py" compute and save the mean and standard deviation of the ensemble predictions for the test dataset computed and saved in point 7 above. 
As mentioned above, if memory allows the points 7 and 8 are merged into one step. ##################################################################################################### 9. File "6_reshape_pred_RPN.py" reshapes and saves the deterministic NN prediction for the test dataset, and the mean and standard deviation of the ensemble predictions for the test dataset for SF-RPN, LF-RPN and MF-RPN models. It uses the saved prediction from step 8 and from running the script "4_pred_RPN_det.py" in step 7. File "6_reshape_pred_RPN.py" also reshapes and saves the actual test dataset output. The reshaped tensors are in shape [dim_y x Nt x lat xlon], where dim_y=48 is the output dimension, Nt the total number of time steps for the test dataset, lat=96 the number of latitude points and lon = 144 the number of longitude points. These results are saved in folders "data_SPCAM5_4K", "MF_param" and "SF_param". ##################################################################################################### 10. File "7_global_errors_temporal_errors.py" computes and saves global (if is_glob_err = 1)and temporal errors (if is_temp_MAE = 1 and/oris_temp_r2 = 1) for all models (det NN, SF-RPN, MF-RPN and LF-RPN). Global errors are saved in folder "glob_errors". Temporal errors are plotted and saved in folder "temp_plots". File "7_global_errors_temporal_errors.py" uses the results obtained in point 9. ##################################################################################################### 11. File "7_global_crps.py" computes and saves the CRPS scores for SF-RPN, MF-RPN and LF-RPN. Individual predictions within the ensemble for each of the models need to be reshaped by setting "is_reshape_single_pred = 1", then the corresponding CRPS score is computed and saved in folder "glob_errors' by setting "is_reshape_single_pred = 0". ##################################################################################################### 12. 
File "7_long_lat_errors.py" computes and saves the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9. ##################################################################################################### 13. File "7_pressure_lat_errors" computes and saves the pressure(altitude)-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9. ##################################################################################################### 14. File "8_plot_global_errors.py" creates the plots for the global errors (MAE, R2 and CRPS) for all models (det NN, SF-RPN, MF-RPN and LF-RPN) using the results obtained in points 10 and 11. The plots are saved in folder "glob_errors". ##################################################################################################### 15. File "8_long_lat_plots.py" creates and saves the plots for the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folder "long_lat_plots" if variable "is_uncert = 0". These plots are based on the results obtained in point 12. File "8_long_lat_plots.py" also creates the plots for the longitude-latitude variations of the uncertainty for SF-RPN, MF-RPN and LF-RPN models if variable "is_uncert = 1". These plots are saved in folder "long_lat_uncert_plots" and are based on results obtained in point 9. ##################################################################################################### 16. File "8_pressure_lat_plots" creates and saves the plots for the pressure(altitude)-latitude variations of R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) under the names "r2_press_lat_heat.png" and "r2_press_lat_moist.png" for heat and moisture tendencies respectively. 
These plots are based on the results obtained in point 13. ##################################################################################################### 17. File "8_uncertainty_density_plot" creates the plots for the density of uncertainty as a function of error for SF-RPN, MF-RPN and LF-RPN models. These plots are saved in folder "uncertainty_density_plots" and are based on results obtained in point 9. ##################################################################################################### 18. File "9_uncertainty_video.py" creates and saves the videos of complete spatio-temporal evolution of MAEs and returned uncertainties for the heat and moisture tendencies by different models (MF-RPN, LF-RPN adn SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folders "videos". File "9_uncertainty_video.py" uses the results obtained in point 9. ##################################################################################################### 19. File "9_uncertainty_video_daily.py" creates and saves the videos of spatio-temporal evolution of MAEs based on daily-averaged predictions and daily-averaged returned uncertainties for the heat and moisture tendencies by different models (MF-RPN, LF-RPN adn SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folders "videos". File "9_uncertainty_video_daily.py" uses the results obtained in point 9.
[ "## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation\n\nCode and data accompanying the manuscript titled \"Multi-fidelity climate model parameterization for better generalization and extrapolation\", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine.", "## Abstract\n\nMachine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate to warmer climates leveraging abundant low-fidelity data and a higher accuracy using resolving high-fidelity data. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources, while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change.\n@article{Bhouri2023MF_RPN_cv_param,\n title = {Multi-fidelity climate model parameterization for better generalization and extrapolation},\n author = {Bhouri, Mohamed Aziz and Peng, Liran and Pritchard, Michael S. 
and Gentine, Pierre },\n journal = {arXiv preprint arXiv:2309.10231},\n doi = {URL\n year = {2023},\n }\n \n- The code was tested using the jax version 0.3.13, the jaxlib version 0.3.10, the numpy version 1.20.1 and the scipy version 1.7.2.\n\n- All codes names intentionally start with numbers in order to make the processing order needed to run them easier to follow:" ]
[ "TAGS\n#license-mit #region-us \n", "## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation\n\nCode and data accompanying the manuscript titled \"Multi-fidelity climate model parameterization for better generalization and extrapolation\", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine.", "## Abstract\n\nMachine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate to warmer climates leveraging abundant low-fidelity data and a higher accuracy using resolving high-fidelity data. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources, while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change.\n@article{Bhouri2023MF_RPN_cv_param,\n title = {Multi-fidelity climate model parameterization for better generalization and extrapolation},\n author = {Bhouri, Mohamed Aziz and Peng, Liran and Pritchard, Michael S. 
and Gentine, Pierre },\n journal = {arXiv preprint arXiv:2309.10231},\n doi = {URL\n year = {2023},\n }\n \n- The code was tested using the jax version 0.3.13, the jaxlib version 0.3.10, the numpy version 1.20.1 and the scipy version 1.7.2.\n\n- All codes names intentionally start with numbers in order to make the processing order needed to run them easier to follow:" ]
[ 11, 74, 443 ]
[ "passage: TAGS\n#license-mit #region-us \n## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation\n\nCode and data accompanying the manuscript titled \"Multi-fidelity climate model parameterization for better generalization and extrapolation\", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine." ]
0111e5c3062b0b72cf3b14465c5bfaf1933fbace
# Dataset Card for "pubchem_bioassay" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
phanvancongthanh/pubchem_bioassay
[ "region:us" ]
2023-10-16T03:41:57+00:00
{"dataset_info": {"features": [{"name": "PUBCHEM_CID", "dtype": "float64"}, {"name": "PUBCHEM_EXT_DATASOURCE_SMILES", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13266669373.336466, "num_examples": 210186056}], "download_size": 6660630004, "dataset_size": 13266669373.336466}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-17T05:51:24+00:00
[]
[]
e12b715be7582af64efc596f8402d916fb64f1e3
# Dataset Card for "TID-8" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** placeholder - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary TID-8 is a new aggregated benchmark focused on the task of letting models learn from data that has inherent disagreement proposed in [link](https://arxiv.org/pdf/2305.14663.pdf) at Findings of EMNLP 2023. In the paper, we focus on the inherent disagreement and let the model directly learn from data that has such disagreement. We provide two split for TID-8. 
*Annotation Split*

We split the annotations for each annotator into train and test sets. In other words, the same set of annotators appears in the train, (val), and test sets. For datasets that have splits originally, we follow the original split and remove datapoints in the test sets that are annotated by an annotator who is not in the training set. For datasets that do not have splits originally, we split the data into train and test sets for convenience; you may further split the train set into a train and a val set.

*Annotator Split*

We split annotators into train and test sets. In other words, a different set of annotators appears in the train and test sets. We split the data into train and test sets for convenience; you may consider further splitting the train set into a train and a val set for performance validation.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

### Data Fields

The data fields are the same among all splits. See the aforementioned information.

### Data Splits

See the aforementioned information.

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{deng2023tid8,
  title={You Are What You Annotate: Towards Better Models through Annotator Representations},
  author={Deng, Naihao and Liu, Siyang and Zhang, Frederick Xinliang and Wu, Winston and Wang, Lu and Mihalcea, Rada},
  booktitle={Findings of EMNLP 2023},
  year={2023}
}
```

Note that each TID-8 dataset has its own citation. Please see the source to get the correct citation for each contained dataset.
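The two split schemes described in the summary can be illustrated with a small self-contained sketch. This is a toy example, not TID-8's actual preprocessing code: the rows and helper names are hypothetical, and only the idea that each annotation is keyed by an `annotator_id` (a field TID-8 exposes) is taken from the dataset.

```python
import random

# Toy annotation rows: (example_id, annotator_id, label).
# In TID-8 every annotation is its own row, keyed by annotator_id.
rows = [(i, f"ann{a}", i % 2) for i in range(10) for a in range(4)]

def annotation_split(rows, test_frac=0.3, seed=0):
    """Annotation split: the same annotators appear in both train and test."""
    rng = random.Random(seed)
    by_annotator = {}
    for r in rows:
        by_annotator.setdefault(r[1], []).append(r)
    train, test = [], []
    for anns in by_annotator.values():
        anns = anns[:]
        rng.shuffle(anns)
        cut = int(len(anns) * (1 - test_frac))  # per-annotator train share
        train.extend(anns[:cut])
        test.extend(anns[cut:])
    return train, test

def annotator_split(rows, test_annotators):
    """Annotator split: train and test annotator sets are disjoint."""
    train = [r for r in rows if r[1] not in test_annotators]
    test = [r for r in rows if r[1] in test_annotators]
    return train, test
```

With the annotation split, a model can learn annotator-specific behavior that it reuses at test time; with the annotator split, it must generalize to annotators it has never seen.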
MichiganNLP/TID-8
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:sentiment-analysis", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<200K", "source_datasets:extended|other", "language:en", "license:unknown", "tid8", "annotation disagreement", "arxiv:2305.14663", "region:us" ]
2023-10-16T03:50:43+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["other"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<200K"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "sentiment-analysis", "hate-speech-detection"], "paperswithcode_id": "placeholder", "pretty_name": "TID-8", "tags": ["tid8", "annotation disagreement"], "dataset_info": [{"config_name": "commitmentbank-ann", "features": [{"name": "HitID", "dtype": "string"}, {"name": "Verb", "dtype": "string"}, {"name": "Context", "dtype": "string"}, {"name": "Prompt", "dtype": "string"}, {"name": "Target", "dtype": "string"}, {"name": "ModalType", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}, {"name": "MatTense", "dtype": "string"}, {"name": "weak_labels", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "-3", "5": "-1", "6": "-2"}}}}], "splits": [{"name": "train", "num_bytes": 7153364, "num_examples": 7816}, {"name": "test", "num_bytes": 3353745, "num_examples": 3729}], "download_size": 3278616, "dataset_size": 10507109}, {"config_name": "commitmentbank-atr", "features": [{"name": "HitID", "dtype": "string"}, {"name": "Verb", "dtype": "string"}, {"name": "Context", "dtype": "string"}, {"name": "Prompt", "dtype": "string"}, {"name": "Target", "dtype": "string"}, {"name": "ModalType", "dtype": "string"}, {"name": "Embedding", "dtype": "string"}, {"name": "MatTense", "dtype": "string"}, {"name": "weak_labels", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, 
{"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "-3", "5": "-1", "6": "-2"}}}}], "splits": [{"name": "train", "num_bytes": 6636145, "num_examples": 7274}, {"name": "test", "num_bytes": 3870964, "num_examples": 4271}], "download_size": 3301698, "dataset_size": 10507109}, {"config_name": "friends_qia-ann", "features": [{"name": "Season", "dtype": "string"}, {"name": "Episode", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Q_person", "dtype": "string"}, {"name": "A_person", "dtype": "string"}, {"name": "Q_original", "dtype": "string"}, {"name": "Q_modified", "dtype": "string"}, {"name": "A_modified", "dtype": "string"}, {"name": "Annotation_1", "dtype": "string"}, {"name": "Annotation_2", "dtype": "string"}, {"name": "Annotation_3", "dtype": "string"}, {"name": "Goldstandard", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "splits": [{"name": "validation", "num_bytes": 687135, "num_examples": 1872}, {"name": "train", "num_bytes": 4870170, "num_examples": 13113}, {"name": "test", "num_bytes": 693033, "num_examples": 1872}], "download_size": 1456765, "dataset_size": 6250338}, {"config_name": "friends_qia-atr", "features": [{"name": "Season", "dtype": "string"}, {"name": "Episode", "dtype": "string"}, {"name": "Category", "dtype": "string"}, {"name": "Q_person", "dtype": "string"}, {"name": "A_person", "dtype": "string"}, {"name": "Q_original", "dtype": "string"}, {"name": "Q_modified", "dtype": "string"}, {"name": "A_modified", "dtype": "string"}, {"name": "Annotation_1", "dtype": "string"}, {"name": "Annotation_2", "dtype": "string"}, {"name": "Annotation_3", "dtype": 
"string"}, {"name": "Goldstandard", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5"}}}}], "splits": [{"name": "train", "num_bytes": 4166892, "num_examples": 11238}, {"name": "test", "num_bytes": 2083446, "num_examples": 5619}], "download_size": 3445839, "dataset_size": 6250338}, {"config_name": "goemotions-ann", "features": [{"name": "author", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "link_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "created_utc", "dtype": "string"}, {"name": "rater_id", "dtype": "string"}, {"name": "example_very_unclear", "dtype": "string"}, {"name": "admiration", "dtype": "string"}, {"name": "amusement", "dtype": "string"}, {"name": "anger", "dtype": "string"}, {"name": "annoyance", "dtype": "string"}, {"name": "approval", "dtype": "string"}, {"name": "caring", "dtype": "string"}, {"name": "confusion", "dtype": "string"}, {"name": "curiosity", "dtype": "string"}, {"name": "desire", "dtype": "string"}, {"name": "disappointment", "dtype": "string"}, {"name": "disapproval", "dtype": "string"}, {"name": "disgust", "dtype": "string"}, {"name": "embarrassment", "dtype": "string"}, {"name": "excitement", "dtype": "string"}, {"name": "fear", "dtype": "string"}, {"name": "gratitude", "dtype": "string"}, {"name": "grief", "dtype": "string"}, {"name": "joy", "dtype": "string"}, {"name": "love", "dtype": "string"}, {"name": "nervousness", "dtype": "string"}, {"name": "optimism", "dtype": "string"}, {"name": "pride", "dtype": "string"}, {"name": "realization", "dtype": "string"}, {"name": "relief", "dtype": "string"}, {"name": "remorse", "dtype": "string"}, {"name": "sadness", "dtype": "string"}, {"name": "surprise", "dtype": 
"string"}, {"name": "neutral", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "positive", "1": "ambiguous", "2": "negative", "3": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 46277072, "num_examples": 135504}, {"name": "test", "num_bytes": 19831033, "num_examples": 58129}], "download_size": 24217871, "dataset_size": 66108105}, {"config_name": "goemotions-atr", "features": [{"name": "author", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "link_id", "dtype": "string"}, {"name": "parent_id", "dtype": "string"}, {"name": "created_utc", "dtype": "string"}, {"name": "rater_id", "dtype": "string"}, {"name": "example_very_unclear", "dtype": "string"}, {"name": "admiration", "dtype": "string"}, {"name": "amusement", "dtype": "string"}, {"name": "anger", "dtype": "string"}, {"name": "annoyance", "dtype": "string"}, {"name": "approval", "dtype": "string"}, {"name": "caring", "dtype": "string"}, {"name": "confusion", "dtype": "string"}, {"name": "curiosity", "dtype": "string"}, {"name": "desire", "dtype": "string"}, {"name": "disappointment", "dtype": "string"}, {"name": "disapproval", "dtype": "string"}, {"name": "disgust", "dtype": "string"}, {"name": "embarrassment", "dtype": "string"}, {"name": "excitement", "dtype": "string"}, {"name": "fear", "dtype": "string"}, {"name": "gratitude", "dtype": "string"}, {"name": "grief", "dtype": "string"}, {"name": "joy", "dtype": "string"}, {"name": "love", "dtype": "string"}, {"name": "nervousness", "dtype": "string"}, {"name": "optimism", "dtype": "string"}, {"name": "pride", "dtype": "string"}, {"name": "realization", "dtype": "string"}, {"name": "relief", "dtype": "string"}, {"name": "remorse", "dtype": "string"}, {"name": "sadness", "dtype": "string"}, {"name": 
"surprise", "dtype": "string"}, {"name": "neutral", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "positive", "1": "ambiguous", "2": "negative", "3": "neutral"}}}}], "splits": [{"name": "train", "num_bytes": 44856233, "num_examples": 131395}, {"name": "test", "num_bytes": 21251872, "num_examples": 62238}], "download_size": 24228953, "dataset_size": 66108105}, {"config_name": "hs_brexit-ann", "features": [{"name": "other annotations", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "hate_speech", "1": "not_hate_speech"}}}}], "splits": [{"name": "train", "num_bytes": 1039008, "num_examples": 4704}, {"name": "test", "num_bytes": 222026, "num_examples": 1008}], "download_size": 144072, "dataset_size": 1261034}, {"config_name": "hs_brexit-atr", "features": [{"name": "other annotations", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "hate_speech", "1": "not_hate_speech"}}}}], "splits": [{"name": "train", "num_bytes": 986132, "num_examples": 4480}, {"name": "test", "num_bytes": 495738, "num_examples": 2240}], "download_size": 604516, "dataset_size": 1481870}, {"config_name": "humor-ann", "features": [{"name": "text_a", "dtype": "string"}, {"name": "text_b", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": 
"int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "B", "1": "X", "2": "A"}}}}], "splits": [{"name": "train", "num_bytes": 28524839, "num_examples": 98735}, {"name": "test", "num_bytes": 12220621, "num_examples": 42315}], "download_size": 24035118, "dataset_size": 40745460}, {"config_name": "humor-atr", "features": [{"name": "text_a", "dtype": "string"}, {"name": "text_b", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "B", "1": "X", "2": "A"}}}}], "splits": [{"name": "train", "num_bytes": 28161248, "num_examples": 97410}, {"name": "test", "num_bytes": 12584212, "num_examples": 43640}], "download_size": 24099282, "dataset_size": 40745460}, {"config_name": "md-agreement-ann", "features": [{"name": "task", "dtype": "string"}, {"name": "original_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "offensive_speech", "1": "not_offensive_speech"}}}}], "splits": [{"name": "train", "num_bytes": 7794988, "num_examples": 32960}, {"name": "test", "num_bytes": 2498445, "num_examples": 10553}], "download_size": 1606671, "dataset_size": 10293433}, {"config_name": "md-agreement-atr", "features": [{"name": "task", "dtype": "string"}, {"name": "original_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, 
{"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "offensive_speech", "1": "not_offensive_speech"}}}}], "splits": [{"name": "train", "num_bytes": 8777085, "num_examples": 37077}, {"name": "test", "num_bytes": 3957021, "num_examples": 16688}], "download_size": 5766114, "dataset_size": 12734106}, {"config_name": "pejorative-ann", "features": [{"name": "pejor_word", "dtype": "string"}, {"name": "word_definition", "dtype": "string"}, {"name": "annotator-1", "dtype": "string"}, {"name": "annotator-2", "dtype": "string"}, {"name": "annotator-3", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "pejorative", "1": "non-pejorative", "2": "undecided"}}}}], "splits": [{"name": "train", "num_bytes": 350734, "num_examples": 1535}, {"name": "test", "num_bytes": 150894, "num_examples": 659}], "download_size": 168346, "dataset_size": 501628}, {"config_name": "pejorative-atr", "features": [{"name": "pejor_word", "dtype": "string"}, {"name": "word_definition", "dtype": "string"}, {"name": "annotator-1", "dtype": "string"}, {"name": "annotator-2", "dtype": "string"}, {"name": "annotator-3", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "pejorative", "1": "non-pejorative", "2": "undecided"}}}}], "splits": [{"name": "train", "num_bytes": 254138, "num_examples": 1112}, {"name": "test", "num_bytes": 247490, "num_examples": 1082}], "download_size": 188229, "dataset_size": 501628}, {"config_name": "sentiment-ann", "features": [{"name": "question", "dtype": "string"}, {"name": 
"uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "Neutral", "1": "Somewhat positive", "2": "Very negative", "3": "Somewhat negative", "4": "Very positive"}}}}], "splits": [{"name": "train", "num_bytes": 9350333, "num_examples": 59235}, {"name": "test", "num_bytes": 235013, "num_examples": 1419}], "download_size": 4906597, "dataset_size": 9585346}, {"config_name": "sentiment-atr", "features": [{"name": "question", "dtype": "string"}, {"name": "uid", "dtype": "string"}, {"name": "id", "dtype": "int32"}, {"name": "annotator_id", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "answer_label", "dtype": {"class_label": {"names": {"0": "Neutral", "1": "Somewhat positive", "2": "Very negative", "3": "Somewhat negative", "4": "Very positive"}}}}], "splits": [{"name": "train", "num_bytes": 6712084, "num_examples": 42439}, {"name": "test", "num_bytes": 2873262, "num_examples": 18215}], "download_size": 4762021, "dataset_size": 9585346}], "configs": [{"config_name": "commitmentbank-ann", "data_files": [{"split": "train", "path": "commitmentbank-ann/train-*"}, {"split": "test", "path": "commitmentbank-ann/test-*"}]}, {"config_name": "commitmentbank-atr", "data_files": [{"split": "train", "path": "commitmentbank-atr/train-*"}, {"split": "test", "path": "commitmentbank-atr/test-*"}]}, {"config_name": "friends_qia-ann", "data_files": [{"split": "validation", "path": "friends_qia-ann/validation-*"}, {"split": "train", "path": "friends_qia-ann/train-*"}, {"split": "test", "path": "friends_qia-ann/test-*"}]}, {"config_name": "friends_qia-atr", "data_files": [{"split": "train", "path": "friends_qia-atr/train-*"}, {"split": "test", "path": "friends_qia-atr/test-*"}]}, {"config_name": "goemotions-ann", "data_files": [{"split": "train", "path": "goemotions-ann/train-*"}, {"split": "test", "path": 
"goemotions-ann/test-*"}]}, {"config_name": "goemotions-atr", "data_files": [{"split": "train", "path": "goemotions-atr/train-*"}, {"split": "test", "path": "goemotions-atr/test-*"}]}, {"config_name": "hs_brexit-ann", "data_files": [{"split": "train", "path": "hs_brexit-ann/train-*"}, {"split": "test", "path": "hs_brexit-ann/test-*"}]}, {"config_name": "hs_brexit-atr", "data_files": [{"split": "train", "path": "hs_brexit-atr/train-*"}, {"split": "test", "path": "hs_brexit-atr/test-*"}]}, {"config_name": "humor-ann", "data_files": [{"split": "train", "path": "humor-ann/train-*"}, {"split": "test", "path": "humor-ann/test-*"}]}, {"config_name": "humor-atr", "data_files": [{"split": "train", "path": "humor-atr/train-*"}, {"split": "test", "path": "humor-atr/test-*"}]}, {"config_name": "md-agreement-ann", "data_files": [{"split": "train", "path": "md-agreement-ann/train-*"}, {"split": "test", "path": "md-agreement-ann/test-*"}]}, {"config_name": "md-agreement-atr", "data_files": [{"split": "train", "path": "md-agreement-atr/train-*"}, {"split": "test", "path": "md-agreement-atr/test-*"}]}, {"config_name": "pejorative-ann", "data_files": [{"split": "train", "path": "pejorative-ann/train-*"}, {"split": "test", "path": "pejorative-ann/test-*"}]}, {"config_name": "pejorative-atr", "data_files": [{"split": "train", "path": "pejorative-atr/train-*"}, {"split": "test", "path": "pejorative-atr/test-*"}]}, {"config_name": "sentiment-ann", "data_files": [{"split": "train", "path": "sentiment-ann/train-*"}, {"split": "test", "path": "sentiment-ann/test-*"}]}, {"config_name": "sentiment-atr", "data_files": [{"split": "train", "path": "sentiment-atr/train-*"}, {"split": "test", "path": "sentiment-atr/test-*"}]}]}
2023-10-30T18:18:31+00:00
[ "2305.14663" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #task_ids-sentiment-analysis #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-1K<n<200K #source_datasets-extended|other #language-English #license-unknown #tid8 #annotation disagreement #arxiv-2305.14663 #region-us
# Dataset Card for "TID-8" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: placeholder - Repository: - Paper: - Point of Contact: ### Dataset Summary TID-8 is a new aggregated benchmark focused on the task of letting models learn from data that has inherent disagreement proposed in link at Findings of EMNLP 2023. In the paper, we focus on the inherent disagreement and let the model directly learn from data that has such disagreement. We provide two split for TID-8. *Annotation Split* We split the annotations for each annotator into train and test set. In other words, the same set of annotators appear in both train, (val), and test sets. For datasets that have splits originally, we follow the original split and remove datapoints in test sets that are annotated by an annotator who is not in the training set. For datasets that do not have splits originally, we split the data into train and test set for convenience, you may further split the train set into a train and val set. *Annotator Split* We split annotators into train and test set. In other words, a different set of annotators would appear in train and test sets. We split the data into train and test set for convenience, you may consider further splitting the train set into a train and val set for performance validation. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields The data fields are the same among all splits. See aforementioned information. 
### Data Splits See aforementioned information. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
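The two split strategies described in the TID-8 summary above can be sketched in plain Python. This is an illustrative reconstruction under stated assumptions (toy records carrying `annotator_id` and `example_id` fields), not the benchmark's own split code:

```python
# A minimal, illustrative sketch (not taken from the TID-8 release) of the two
# split strategies described above, applied to toy annotation records.
import random


def annotation_split(records, test_frac=0.2, seed=0):
    """Annotation split: the same annotators appear in train and test.

    We partition by example, then keep only test rows whose annotator
    also occurs in the training set (mirroring the card's filtering rule).
    """
    rng = random.Random(seed)
    examples = sorted({r["example_id"] for r in records})
    rng.shuffle(examples)
    test_examples = set(examples[: int(len(examples) * test_frac)])
    train = [r for r in records if r["example_id"] not in test_examples]
    test = [r for r in records if r["example_id"] in test_examples]
    seen_annotators = {r["annotator_id"] for r in train}
    test = [r for r in test if r["annotator_id"] in seen_annotators]
    return train, test


def annotator_split(records, test_frac=0.2, seed=0):
    """Annotator split: train and test use disjoint sets of annotators."""
    rng = random.Random(seed)
    annotators = sorted({r["annotator_id"] for r in records})
    rng.shuffle(annotators)
    test_annotators = set(annotators[: int(len(annotators) * test_frac)])
    train = [r for r in records if r["annotator_id"] not in test_annotators]
    test = [r for r in records if r["annotator_id"] in test_annotators]
    return train, test
```

Note the asymmetry: under the annotation split every test annotator is guaranteed to be seen during training, while under the annotator split no test annotator ever is.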
[ "# Dataset Card for \"TID-8\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: placeholder\n- Repository: \n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nTID-8 is a new aggregated benchmark focused on the task of letting models learn from data that has inherent disagreement proposed in link at Findings of EMNLP 2023.\nIn the paper, we focus on the inherent disagreement and let the model directly learn from data that has such disagreement.\n\nWe provide two split for TID-8.\n\n*Annotation Split*\n\nWe split the annotations for each annotator into train and test set.\n\nIn other words, the same set of annotators appear in both train, (val),\nand test sets.\n\nFor datasets that have splits originally, we follow the original split and remove\ndatapoints in test sets that are annotated by an annotator who is not in\nthe training set.\n\nFor datasets that do not have splits originally, we split the data into \ntrain and test set for convenience, you may further split the train set\ninto a train and val set.\n\n*Annotator Split*\n\nWe split annotators into train and test set.\n\nIn other words, a different set of annotators would appear in train and test sets.\n\nWe split the data into train and test set for convenience, you may consider\nfurther splitting the train set into a train and val set for performance validation.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data 
Fields\n\nThe data fields are the same among all splits.\nSee aforementioned information.", "### Data Splits\n\nSee aforementioned information.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-sentiment-analysis #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-1K<n<200K #source_datasets-extended|other #language-English #license-unknown #tid8 #annotation disagreement #arxiv-2305.14663 #region-us \n", "# Dataset Card for \"TID-8\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: placeholder\n- Repository: \n- Paper: \n- Point of Contact:", "### Dataset Summary\n\nTID-8 is a new aggregated benchmark focused on the task of letting models learn from data that has inherent disagreement proposed in link at Findings of EMNLP 2023.\nIn the paper, we focus on the inherent disagreement and let the model directly learn from data that has such disagreement.\n\nWe provide two split for TID-8.\n\n*Annotation Split*\n\nWe split the annotations for each annotator into train and test set.\n\nIn other words, the same set of annotators appear in both train, (val),\nand test sets.\n\nFor datasets that have splits originally, we follow the original split and remove\ndatapoints in test sets that are annotated by an annotator who is not in\nthe training set.\n\nFor datasets that do not have splits originally, we split the data into \ntrain and test set for convenience, you may further split the train set\ninto a train and val set.\n\n*Annotator Split*\n\nWe split annotators into 
train and test set.\n\nIn other words, a different set of annotators would appear in train and test sets.\n\nWe split the data into train and test set for convenience, you may consider\nfurther splitting the train set into a train and val set for performance validation.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nThe data fields are the same among all splits.\nSee aforementioned information.", "### Data Splits\n\nSee aforementioned information.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ 136, 10, 120, 22, 279, 10, 4, 6, 6, 25, 13, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-sentiment-analysis #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-1K<n<200K #source_datasets-extended|other #language-English #license-unknown #tid8 #annotation disagreement #arxiv-2305.14663 #region-us \n# Dataset Card for \"TID-8\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: placeholder\n- Repository: \n- Paper: \n- Point of Contact:" ]
c95faec861cd34c0a5e11198e433767e20caf525
# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-16T05:28:09.073903](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b/blob/main/results_2023-10-16T05-28-09.073903.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.06354865771812081, "em_stderr": 0.002498247436471722, "f1": 0.14479865771812028, "f1_stderr": 0.002890194024794147, "acc": 0.3913161593683, "acc_stderr": 0.009083920481175163 }, "harness|drop|3": { "em": 0.06354865771812081, "em_stderr": 0.002498247436471722, "f1": 0.14479865771812028, "f1_stderr": 0.002890194024794147 }, "harness|gsm8k|5": { "acc": 0.04624715693707354, "acc_stderr": 0.005784991662691864 }, "harness|winogrande|5": { "acc": 0.7363851617995264, "acc_stderr": 0.01238284929965846 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
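The aggregated `"all"` figures in the results block above appear to be simple unweighted means of the per-task metrics; a small illustrative check using the numbers quoted in this card (an observation about these values, not documented leaderboard behavior):

```python
# Illustrative sanity check: the "all" accuracy quoted above looks like the
# unweighted mean of the gsm8k and winogrande accuracies from the same run.
gsm8k_acc = 0.04624715693707354
winogrande_acc = 0.7363851617995264
gsm8k_stderr = 0.005784991662691864
winogrande_stderr = 0.01238284929965846

mean_acc = (gsm8k_acc + winogrande_acc) / 2
mean_stderr = (gsm8k_stderr + winogrande_stderr) / 2

# Both agree with the reported aggregates to well within float precision.
assert abs(mean_acc - 0.3913161593683) < 1e-9
assert abs(mean_stderr - 0.009083920481175163) < 1e-12
```

(The drop task reports em/f1 rather than acc, so it does not enter the accuracy mean.)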
open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b
[ "region:us" ]
2023-10-16T04:28:13+00:00
{"pretty_name": "Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T05:28:09.073903](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b/blob/main/results_2023-10-16T05-28-09.073903.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06354865771812081,\n \"em_stderr\": 0.002498247436471722,\n \"f1\": 0.14479865771812028,\n \"f1_stderr\": 0.002890194024794147,\n \"acc\": 0.3913161593683,\n \"acc_stderr\": 0.009083920481175163\n },\n \"harness|drop|3\": {\n \"em\": 0.06354865771812081,\n \"em_stderr\": 0.002498247436471722,\n \"f1\": 0.14479865771812028,\n \"f1_stderr\": 0.002890194024794147\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04624715693707354,\n \"acc_stderr\": 0.005784991662691864\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7363851617995264,\n \"acc_stderr\": 0.01238284929965846\n }\n}\n```", "repo_url": "https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T05_28_09.073903", "path": ["**/details_harness|drop|3_2023-10-16T05-28-09.073903.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T05-28-09.073903.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T05_28_09.073903", "path": ["**/details_harness|gsm8k|5_2023-10-16T05-28-09.073903.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T05-28-09.073903.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T05_28_09.073903", "path": ["**/details_harness|winogrande|5_2023-10-16T05-28-09.073903.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T05-28-09.073903.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T05_28_09.073903", "path": ["results_2023-10-16T05-28-09.073903.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T05-28-09.073903.parquet"]}]}]}
2023-10-16T04:28:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model The-Face-Of-Goonery/Huginn-v3-13b on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T05:28:09.073903 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You will find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
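The aggregated "acc" in the latest-results block of this record is simply the mean of the per-task accuracies; a minimal sketch, using the task names and values copied from the results JSON quoted in this record:

```python
# Recompute the aggregated "acc" from the per-task results quoted above.
# The numbers are copied from this run's results block; only tasks that
# report an "acc" metric enter the aggregate.
results = {
    "harness|gsm8k|5": {"acc": 0.04624715693707354},
    "harness|winogrande|5": {"acc": 0.7363851617995264},
}

accs = [task["acc"] for task in results.values()]
aggregate_acc = sum(accs) / len(accs)

print(round(aggregate_acc, 13))  # 0.3913161593683, as reported
```

The record's metadata also shows how to fetch a split of the details, e.g. `load_dataset("open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b", "harness_winogrande_5", split="train")`.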
[ "# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model The-Face-Of-Goonery/Huginn-v3-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T05:28:09.073903(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model The-Face-Of-Goonery/Huginn-v3-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T05:28:09.073903(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 27, 31, 175, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model The-Face-Of-Goonery/Huginn-v3-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T05:28:09.073903(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
b32559337bdead68216b834ee1b77b7d8b768a83
# Dataset Card for "uva-human-val-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saahith/EMSContExt_audio
[ "region:us" ]
2023-10-16T04:47:47+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcript", "dtype": "string"}, {"name": "duration", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 90269560.0, "num_examples": 109}], "download_size": 89515897, "dataset_size": 90269560.0}}
2023-10-16T04:47:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "uva-human-val-v1" More Information needed
[ "# Dataset Card for \"uva-human-val-v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"uva-human-val-v1\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"uva-human-val-v1\"\n\nMore Information needed" ]
a693f0ed79367028b5478fc961d1c20825c41a0a
# Dataset Card for "passage_rationale" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fiveflow/passage_rationale
[ "region:us" ]
2023-10-16T05:00:22+00:00
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 269246, "num_examples": 47}], "download_size": 87592, "dataset_size": 269246}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-17T01:14:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "passage_rationale" More Information needed
[ "# Dataset Card for \"passage_rationale\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"passage_rationale\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"passage_rationale\"\n\nMore Information needed" ]
57d29939a218a4f1ff48b274b3fc3b49c2c564c9
# Dataset Card for "cpgqa_processed-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pbaoo2705/cpgqa_processed-2
[ "region:us" ]
2023-10-16T05:02:38+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9148601, "num_examples": 884}], "download_size": 190231, "dataset_size": 9148601}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T05:02:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cpgqa_processed-2" More Information needed
[ "# Dataset Card for \"cpgqa_processed-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cpgqa_processed-2\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cpgqa_processed-2\"\n\nMore Information needed" ]
2b4a7fb803ed40206853e074934897f26b6dcea4
# Dataset Card for "cpgqa_processed_eval-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pbaoo2705/cpgqa_processed_eval-2
[ "region:us" ]
2023-10-16T05:02:40+00:00
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "context", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 1247857, "num_examples": 109}], "download_size": 48016, "dataset_size": 1247857}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2023-10-16T09:29:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cpgqa_processed_eval-2" More Information needed
[ "# Dataset Card for \"cpgqa_processed_eval-2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cpgqa_processed_eval-2\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cpgqa_processed_eval-2\"\n\nMore Information needed" ]
533a4fbaba69a91ae2a9776328c90021ac57df92
The pre-training dataset of paper "G-SPEED: General SParse Efficient Editing MoDel". Visit https://github.com/Banner-Z/G-SPEED.git for more details.
zhk/wiki-edits
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2023-10-16T05:17:59+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]}
2023-10-16T06:22:11+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
The pre-training dataset of paper "G-SPEED: General SParse Efficient Editing MoDel". Visit URL for more details.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n" ]
[ 41 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n" ]
01688dfa99a42dc036fd15ee401184be30c144b8
# Dataset Card for "laion2B-en-aesthetic-seed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liangyuch/laion2B-en-aesthetic-seed
[ "region:us" ]
2023-10-16T05:46:39+00:00
{"dataset_info": {"features": [{"name": "URL", "dtype": "string"}, {"name": "TEXT", "dtype": "string"}, {"name": "WIDTH", "dtype": "float64"}, {"name": "HEIGHT", "dtype": "float64"}, {"name": "similarity", "dtype": "float64"}, {"name": "hash", "dtype": "int64"}, {"name": "punsafe", "dtype": "float32"}, {"name": "pwatermark", "dtype": "float32"}, {"name": "aesthetic", "dtype": "float32"}, {"name": "SEED", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3164015506, "num_examples": 6435280}], "download_size": 1545264197, "dataset_size": 3164015506}}
2023-10-16T05:49:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "laion2B-en-aesthetic-seed" More Information needed
[ "# Dataset Card for \"laion2B-en-aesthetic-seed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"laion2B-en-aesthetic-seed\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"laion2B-en-aesthetic-seed\"\n\nMore Information needed" ]
380148ed13168484698658a0f916d10f9d200136
DevOps-Eval is a comprehensive Chinese evaluation suite specifically designed for foundation models in the DevOps field. It consists of 5977 multi-choice questions spanning 55 diverse categories. Please visit our website and [GitHub](https://github.com/codefuse-ai/codefuse-devops-eval) for more details. Each category consists of two splits: dev and test. The dev set per subject consists of five exemplars with explanations for few-shot evaluation, and the test set is for model evaluation. Labels on the test split are released, so users can evaluate their results and automatically obtain test accuracy. [How to evaluate](https://github.com/codefuse-ai/codefuse-devops-eval#-how-to-evaluate)? ### Load the data ``` python from datasets import load_dataset dataset=load_dataset(r"devopseval-exam",name="UnitTesting") print(dataset['val'][0]) # {"id": 1, "question": "单元测试应该覆盖以下哪些方面?", "A": "正常路径", "B": "异常路径", "C": "边界值条件", "D": "所有以上", "answer": "D", "explanation": ""} ``` #### Notes More details on loading and using the data are on our [GitHub](https://github.com/codefuse-ai/codefuse-devops-eval) page.
codefuse-ai/CodeFuse-DevOps-Eval
[ "task_categories:question-answering", "task_categories:multiple-choice", "size_categories:n<1K", "language:en", "language:zh", "license:mit", "devops", "aiops", "llm", "region:us" ]
2023-10-16T05:52:24+00:00
{"language": ["en", "zh"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["question-answering", "multiple-choice"], "pretty_name": "DevOps-Eval", "tags": ["devops", "aiops", "llm"]}
2023-11-30T11:08:21+00:00
[]
[ "en", "zh" ]
TAGS #task_categories-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-English #language-Chinese #license-mit #devops #aiops #llm #region-us
DevOps-Eval is a comprehensive Chinese evaluation suite specifically designed for foundation models in the DevOps field. It consists of 5977 multi-choice questions spanning 55 diverse categories. Please visit our website and GitHub for more details. Each category consists of two splits: dev and test. The dev set per subject consists of five exemplars with explanations for few-shot evaluation, and the test set is for model evaluation. Labels on the test split are released, so users can evaluate their results and automatically obtain test accuracy. How to evaluate? ### Load the data ''' #### Notes More details on loading and using the data are on our GitHub page.
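The card notes that users can score their results and obtain test accuracy; an illustrative sketch of such scoring, against records shaped like the sample record the card prints (field names taken from that sample, the scoring loop itself is an assumption, not the official evaluator):

```python
# Illustrative sketch (not the official evaluator): score multiple-choice
# predictions against records shaped like the sample record shown above.
questions = [
    {"id": 1, "question": "单元测试应该覆盖以下哪些方面?",
     "A": "正常路径", "B": "异常路径", "C": "边界值条件", "D": "所有以上",
     "answer": "D", "explanation": ""},
    {"id": 2, "question": "...", "A": "...", "B": "...", "C": "...", "D": "...",
     "answer": "B", "explanation": ""},
]
predictions = {1: "D", 2: "C"}  # the model's chosen option per question id

correct = sum(1 for q in questions if predictions.get(q["id"]) == q["answer"])
accuracy = correct / len(questions)
print(accuracy)  # 0.5 for this toy pair
```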
[ "### Load the data\n\n'''", "#### Notes\nMore details on loading and using the data are at our github page github page." ]
[ "TAGS\n#task_categories-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-English #language-Chinese #license-mit #devops #aiops #llm #region-us \n", "### Load the data\n\n'''", "#### Notes\nMore details on loading and using the data are at our github page github page." ]
[ 63, 8, 23 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-multiple-choice #size_categories-n<1K #language-English #language-Chinese #license-mit #devops #aiops #llm #region-us \n### Load the data\n\n'''#### Notes\nMore details on loading and using the data are at our github page github page." ]
8e53cb324c7feec2b3889140c428720d8a834831
# Dataset Card for "poetry-chinese-zhtw" ## 資料集摘要 中文古典文集資料庫收集了約 5.5 萬首唐詩、26 萬首宋詩、2.1 萬首宋詞和其他古典文集。詩人包括唐宋兩朝近 1.4 萬古詩人,和兩宋時期 1.5 千古詞人。 - **五代十國**- 收錄"花間集"與"南唐二主詞" - **唐**- 收錄"全唐詩"(是清康熙四十四年,康熙皇帝主導下,蒐集羅唐詩的收藏「得詩 48,900 餘首,詩入 2,200 人」)。 - **宋**- 收錄"全宋詞"(由唐圭璋編著,孔凡禮補輯,共收錄宋代詞人 1,330 家,詞作 21,116 首)。 - **元**- 收錄元曲 11,057 篇,曲家 233 人。 - **清**- 收錄"納蘭性德詩集" 原始資料來源: - [chinese-poetry: 最全中文诗歌古典文集数据库](https://github.com/chinese-poetry/chinese-poetry/tree/master) ## 資料下載清理 1. 下載 [chinese-poetry: 最全中文诗歌古典文集数据库](https://github.com/chinese-poetry/chinese-poetry/tree/master) 的 Repo 2. 調整資料呈現結構便於模型訓練 3. 使用 OpenCC 來進行簡繁轉換 4. 使用 Huggingface Datasets 來上傳至 Huggingface Hub ## 資料集結構 ```json { "author":"杜甫", "title":"月", "text":"天上秋期近,人間月影清。入河蟾不沒,搗藥兔長生。只益丹心苦,能添白髮明。干戈知滿地,休照國西營。", "category":"唐" } ``` ## 資料欄位 - `author`: (string) 作者 - `title`: (string) 作品名稱 - `text`: (string) 文章內容 - `category`: (string) 作品的朝代 ## 如何使用 ```python from datasets import load_dataset dataset = load_dataset("erhwenkuo/poetry-chinese-zhtw", split="train") ``` ## 許可資訊 [MIT](https://zh.wikipedia.org/zh-tw/MIT%E8%A8%B1%E5%8F%AF%E8%AD%89)
erhwenkuo/poetry-chinese-zhtw
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:zh", "license:mit", "region:us" ]
2023-10-16T06:16:02+00:00
{"language": ["zh"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "author", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19839577, "num_examples": 76013}], "download_size": 15009797, "dataset_size": 19839577}}
2023-10-16T07:16:59+00:00
[]
[ "zh" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-mit #region-us
# Dataset Card for "poetry-chinese-zhtw" ## 資料集摘要 中文古典文集資料庫收集了約 5.5 萬首唐詩、26 萬首宋詩、2.1 萬首宋詞和其他古典文集。詩人包括唐宋兩朝近 1.4 萬古詩人,和兩宋時期 1.5 千古詞人。 - 五代十國- 收錄"花間集"與"南唐二主詞" - 唐- 收錄"全唐詩"(是清康熙四十四年,康熙皇帝主導下,蒐集羅唐詩的收藏「得詩 48,900 餘首,詩入 2,200 人」)。 - 宋- 收錄"全宋詞"(由唐圭璋編著,孔凡禮補輯,共收錄宋代詞人 1,330 家,詞作 21,116 首)。 - 元- 收錄元曲 11,057 篇,曲家 233 人。 - 清- 收錄"納蘭性德詩集" 原始資料來源: - chinese-poetry: 最全中文诗歌古典文集数据库 ## 資料下載清理 1. 下載 chinese-poetry: 最全中文诗歌古典文集数据库 的 Repo 2. 調整資料呈現結構便於模型訓練 3. 使用 OpenCC 來進行簡繁轉換 4. 使用 Huggingface Datasets 來上傳至 Huggingface Hub ## 資料集結構 ## 資料欄位 - 'author': (string) 作者 - 'title': (string) 作品名稱 - 'text': (string) 文章內容 - 'category': (string) 作品的朝代 ## 如何使用 ## 許可資訊 MIT
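The record schema described above (author / title / text / category) can be used directly for filtering, e.g. keeping only Tang-dynasty poems; a minimal sketch using the sample record from this card:

```python
# Minimal sketch: filter records of the documented schema by dynasty.
# The sample record is taken from this card's 資料集結構 section.
records = [
    {"author": "杜甫", "title": "月",
     "text": "天上秋期近,人間月影清。入河蟾不沒,搗藥兔長生。只益丹心苦,能添白髮明。干戈知滿地,休照國西營。",
     "category": "唐"},
]

tang_poems = [r for r in records if r["category"] == "唐"]
print(len(tang_poems), tang_poems[0]["author"])  # 1 杜甫
```

With the full dataset, the same filter applies after `load_dataset("erhwenkuo/poetry-chinese-zhtw", split="train")`.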
[ "# Dataset Card for \"poetry-chinese-zhtw\"", "## 資料集摘要\n\n中文古典文集資料庫收集了約 5.5 萬首唐詩、26 萬首宋詩、2.1 萬首宋詞和其他古典文集。詩人包括唐宋兩朝近 1.4 萬古詩人,和兩宋時期 1.5 千古詞人。\n\n- 五代十國- 收錄\"花間集\"與\"南唐二主詞\"\n- 唐- 收錄\"全唐詩\"(是清康熙四十四年,康熙皇帝主導下,蒐集羅唐詩的收藏「得詩 48,900 餘首,詩入 2,200 人」)。\n- 宋- 收錄\"全宋詞\"(由唐圭璋編著,孔凡禮補輯,共收錄宋代詞人 1,330 家,詞作 21,116 首)。\n- 元- 收錄元曲 11,057 篇,曲家 233 人。\n- 清- 收錄\"納蘭性德詩集\"\n\n原始資料來源:\n- chinese-poetry: 最全中文诗歌古典文集数据库", "## 資料下載清理\n\n1. 下載 chinese-poetry: 最全中文诗歌古典文集数据库 的 Repo\n2. 調整資料呈現結構便於模型訓練\n3. 使用 OpenCC 來進行簡繁轉換\n4. 使用 Huggingface Datasets 來上傳至 Huggingface Hub", "## 資料集結構", "## 資料欄位\n\n- 'author': (string) 作者\n- 'title': (string) 作品名稱\n- 'text': (string) 文章內容\n- 'category': (string) 作品的朝代", "## 如何使用", "## 許可資訊\n\nMIT" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-mit #region-us \n", "# Dataset Card for \"poetry-chinese-zhtw\"", "## 資料集摘要\n\n中文古典文集資料庫收集了約 5.5 萬首唐詩、26 萬首宋詩、2.1 萬首宋詞和其他古典文集。詩人包括唐宋兩朝近 1.4 萬古詩人,和兩宋時期 1.5 千古詞人。\n\n- 五代十國- 收錄\"花間集\"與\"南唐二主詞\"\n- 唐- 收錄\"全唐詩\"(是清康熙四十四年,康熙皇帝主導下,蒐集羅唐詩的收藏「得詩 48,900 餘首,詩入 2,200 人」)。\n- 宋- 收錄\"全宋詞\"(由唐圭璋編著,孔凡禮補輯,共收錄宋代詞人 1,330 家,詞作 21,116 首)。\n- 元- 收錄元曲 11,057 篇,曲家 233 人。\n- 清- 收錄\"納蘭性德詩集\"\n\n原始資料來源:\n- chinese-poetry: 最全中文诗歌古典文集数据库", "## 資料下載清理\n\n1. 下載 chinese-poetry: 最全中文诗歌古典文集数据库 的 Repo\n2. 調整資料呈現結構便於模型訓練\n3. 使用 OpenCC 來進行簡繁轉換\n4. 使用 Huggingface Datasets 來上傳至 Huggingface Hub", "## 資料集結構", "## 資料欄位\n\n- 'author': (string) 作者\n- 'title': (string) 作品名稱\n- 'text': (string) 文章內容\n- 'category': (string) 作品的朝代", "## 如何使用", "## 許可資訊\n\nMIT" ]
[ 39, 16, 240, 67, 5, 54, 4, 5 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Chinese #license-mit #region-us \n# Dataset Card for \"poetry-chinese-zhtw\"## 資料集摘要\n\n中文古典文集資料庫收集了約 5.5 萬首唐詩、26 萬首宋詩、2.1 萬首宋詞和其他古典文集。詩人包括唐宋兩朝近 1.4 萬古詩人,和兩宋時期 1.5 千古詞人。\n\n- 五代十國- 收錄\"花間集\"與\"南唐二主詞\"\n- 唐- 收錄\"全唐詩\"(是清康熙四十四年,康熙皇帝主導下,蒐集羅唐詩的收藏「得詩 48,900 餘首,詩入 2,200 人」)。\n- 宋- 收錄\"全宋詞\"(由唐圭璋編著,孔凡禮補輯,共收錄宋代詞人 1,330 家,詞作 21,116 首)。\n- 元- 收錄元曲 11,057 篇,曲家 233 人。\n- 清- 收錄\"納蘭性德詩集\"\n\n原始資料來源:\n- chinese-poetry: 最全中文诗歌古典文集数据库## 資料下載清理\n\n1. 下載 chinese-poetry: 最全中文诗歌古典文集数据库 的 Repo\n2. 調整資料呈現結構便於模型訓練\n3. 使用 OpenCC 來進行簡繁轉換\n4. 使用 Huggingface Datasets 來上傳至 Huggingface Hub## 資料集結構## 資料欄位\n\n- 'author': (string) 作者\n- 'title': (string) 作品名稱\n- 'text': (string) 文章內容\n- 'category': (string) 作品的朝代## 如何使用## 許可資訊\n\nMIT" ]
1a6643ac1e13d37376d48ae7ed2fec2f866ac9bb
# Dataset Card for "L1_tabular_data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sankettgorey/L1_tabular_data
[ "region:us" ]
2023-10-16T06:41:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 146553128.0, "num_examples": 560}, {"name": "test", "num_bytes": 18313783.5, "num_examples": 70}, {"name": "validation", "num_bytes": 18343643.5, "num_examples": 70}], "download_size": 152684335, "dataset_size": 183210555.0}}
2023-10-16T06:41:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "L1_tabular_data" More Information needed
[ "# Dataset Card for \"L1_tabular_data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"L1_tabular_data\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"L1_tabular_data\"\n\nMore Information needed" ]
651db6c33febab9eaf1255a67d9d7bbc9d48c66c
# Dataset Card for "cc100-yue" The Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others. ## Filtering Process The filtering process follows the article [Building a Hong Kongese Language Identifier](https://medium.com/@kyubi_fox/building-a-hong-kongese-language-identifier-5e20fd221323) by ToastyNews
indiejoseph/cc100-yue
[ "region:us" ]
2023-10-16T06:46:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32135136, "num_examples": 176047}], "download_size": 23579906, "dataset_size": 32135136}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-17T18:40:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cc100-yue" The Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others. ## Filtering Process The filtering process follows the article Building a Hong Kongese Language Identifier by ToastyNews
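The filtering idea can be sketched as keeping only lines a language identifier tags as Cantonese; the classifier below is a toy stand-in (the actual identifier is the one described in the linked ToastyNews article, not this heuristic):

```python
# Hypothetical sketch of language-ID filtering; `identify_language` is a
# toy stand-in, not the identifier described in the linked article.
def identify_language(line: str) -> str:
    # Toy heuristic: lines containing common Cantonese-specific
    # characters are tagged "yue", everything else "zh".
    cantonese_markers = set("嘅咗喺啲嚟乜嘢")
    return "yue" if any(ch in cantonese_markers for ch in line) else "zh"

corpus = [
    "佢哋今日去咗邊度?",  # Cantonese
    "他们今天去了哪里?",  # Mandarin
]
filtered = [line for line in corpus if identify_language(line) == "yue"]
print(len(filtered))  # 1
```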
[ "# Dataset Card for \"cc100-yue\"\n\nThe Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others.", "## Filtering Process\n\nThe filtering process is according to article Building a Hong Kongese Language Identifier by ToastyNews" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cc100-yue\"\n\nThe Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others.", "## Filtering Process\n\nThe filtering process is according to article Building a Hong Kongese Language Identifier by ToastyNews" ]
[ 6, 78, 25 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cc100-yue\"\n\nThe Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others.## Filtering Process\n\nThe filtering process is according to article Building a Hong Kongese Language Identifier by ToastyNews" ]
f9722e2e4e38423ef0842844f14ca1109874fa22
# Dataset Card for "guanaco-llama2-1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
waveww/guanaco-llama2-1k
[ "region:us" ]
2023-10-16T07:30:09+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 966693, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T07:30:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "guanaco-llama2-1k" More Information needed
[ "# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed" ]
4648ebfb58cfe9c3cd36914cce7816e2d3a4c79f
**Supervised Fine-Tuning Dataset (SFT and RLHF)** - Dataset Name: medical_finetune_tw.json - Description: This dataset comprises a total of 2.06 million entries drawn from the following sources: 1. Six medical department medical inquiry datasets from the [Chinese Medical Dialogue Dataset](https://github.com/Toyhom/Chinese-medical-dialogue-data), totaling 790,000 entries. 2. An online medical encyclopedia dataset, [huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), with 360,000 entries. 3. A medical knowledge graph dataset, [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa), with 790,000 entries. These three parts are merged, resulting in a dataset with a total of 1.95 million entries. 4. English medical inquiry dialogue data from [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), which includes data from the HealthCareMagic-100k and GenMedGPT-5k datasets, totaling 110,000 entries.
DavidLanz/medical_instruction
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:zh", "language:en", "license:apache-2.0", "text-generation", "region:us" ]
2023-10-16T07:32:05+00:00
{"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "medical", "tags": ["text-generation"]}
2023-10-16T07:41:48+00:00
[]
[ "zh", "en" ]
TAGS #task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us
Supervised Fine-Tuning Dataset (SFT and RLHF) - Dataset Name: medical_finetune_tw.json - Description: This dataset comprises a total of 2.06 million entries drawn from the following sources: 1. Six medical department medical inquiry datasets from the Chinese Medical Dialogue Dataset, totaling 790,000 entries. 2. An online medical encyclopedia dataset, huatuo_encyclopedia_qa, with 360,000 entries. 3. A medical knowledge graph dataset, huatuo_knowledge_graph_qa, with 790,000 entries. These three parts are merged, resulting in a dataset with a total of 1.95 million entries. 4. English medical inquiry dialogue data from Kent0n-Li/ChatDoctor, which includes data from the HealthCareMagic-100k and GenMedGPT-5k datasets, totaling 110,000 entries.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Chinese #language-English #license-apache-2.0 #text-generation #region-us \n" ]
64d7f23231aa46e8b26714f47888982c884c162d
Download files for pre-computed embeddings.
binwang/InductivE-embeddings
[ "license:mit", "region:us" ]
2023-10-16T07:40:15+00:00
{"license": "mit"}
2023-10-17T02:00:18+00:00
[]
[]
TAGS #license-mit #region-us
Download files for pre-computed embeddings.
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
fbcbb6285811b9114d1d57a75ea6d72d9f82ccdb
# Dataset Card for "hand_vs_machine" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dreeven/hand_vs_machine
[ "region:us" ]
2023-10-16T08:00:31+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "hand", "1": "machine"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 155717.0, "num_examples": 2}, {"name": "train", "num_bytes": 153606.0, "num_examples": 18}], "download_size": 297352, "dataset_size": 309323.0}}
2023-10-16T08:02:06+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hand_vs_machine" More Information needed
[ "# Dataset Card for \"hand_vs_machine\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hand_vs_machine\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hand_vs_machine\"\n\nMore Information needed" ]
76884f4573134885168812ddf80ae28d586c1211
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Zollo757347/adl_hw1_dataset
[ "region:us" ]
2023-10-16T08:26:18+00:00
{}
2023-10-17T11:11:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
7a596bff543bb754248e22a7d56fd0470aa1de54
# Dataset Card for "medical-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thangvip/medical-data
[ "region:us" ]
2023-10-16T08:29:24+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9331012, "num_examples": 603}], "download_size": 4263217, "dataset_size": 9331012}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T08:29:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "medical-data" More Information needed
[ "# Dataset Card for \"medical-data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"medical-data\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"medical-data\"\n\nMore Information needed" ]
a262334eb3d0dd4edd25d108b63ec29c9841c943
# Dataset Card for Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/Lajonbot/WizardLM-13B-V1.2-PL-lora_unload - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [Lajonbot/WizardLM-13B-V1.2-PL-lora_unload](https://huggingface.co/Lajonbot/WizardLM-13B-V1.2-PL-lora_unload) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Lajonbot__WizardLM-13B-V1.2-PL-lora_unload", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-16T09:37:24.771314](https://huggingface.co/datasets/open-llm-leaderboard/details_Lajonbot__WizardLM-13B-V1.2-PL-lora_unload/blob/main/results_2023-10-16T09-37-24.771314.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.003984899328859061, "em_stderr": 0.0006451805848102423, "f1": 0.06672923657718131, "f1_stderr": 0.0015525464124355034, "acc": 0.41089372554487175, "acc_stderr": 0.010708286080716344 }, "harness|drop|3": { "em": 0.003984899328859061, "em_stderr": 0.0006451805848102423, "f1": 0.06672923657718131, "f1_stderr": 0.0015525464124355034 }, "harness|gsm8k|5": { "acc": 0.11144806671721001, "acc_stderr": 0.008668021353794427 }, "harness|winogrande|5": { "acc": 0.7103393843725335, "acc_stderr": 0.012748550807638263 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
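As a quick sanity check, the reported "all" accuracy above can be recomputed from the per-task entries — this is a sketch that assumes (without confirming against the leaderboard's actual code) that the aggregate is the unweighted mean of the accuracy-based tasks:

```python
# Per-task accuracies copied from the "Latest results" block above.
task_results = {
    "harness|gsm8k|5": 0.11144806671721001,
    "harness|winogrande|5": 0.7103393843725335,
}

# Assumed aggregation: plain unweighted mean over the accuracy-based tasks.
mean_acc = sum(task_results.values()) / len(task_results)

print(mean_acc)  # approx. 0.41089, matching the reported "all" acc
```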
open-llm-leaderboard/details_Lajonbot__WizardLM-13B-V1.2-PL-lora_unload
[ "region:us" ]
2023-10-16T08:37:28+00:00
{"pretty_name": "Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload", "dataset_summary": "Dataset automatically created during the evaluation run of model [Lajonbot/WizardLM-13B-V1.2-PL-lora_unload](https://huggingface.co/Lajonbot/WizardLM-13B-V1.2-PL-lora_unload) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Lajonbot__WizardLM-13B-V1.2-PL-lora_unload\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T09:37:24.771314](https://huggingface.co/datasets/open-llm-leaderboard/details_Lajonbot__WizardLM-13B-V1.2-PL-lora_unload/blob/main/results_2023-10-16T09-37-24.771314.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102423,\n \"f1\": 0.06672923657718131,\n \"f1_stderr\": 0.0015525464124355034,\n \"acc\": 0.41089372554487175,\n \"acc_stderr\": 0.010708286080716344\n },\n \"harness|drop|3\": {\n \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102423,\n \"f1\": 0.06672923657718131,\n \"f1_stderr\": 0.0015525464124355034\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11144806671721001,\n \"acc_stderr\": 0.008668021353794427\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7103393843725335,\n \"acc_stderr\": 0.012748550807638263\n }\n}\n```", "repo_url": "https://huggingface.co/Lajonbot/WizardLM-13B-V1.2-PL-lora_unload", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T09_37_24.771314", "path": ["**/details_harness|drop|3_2023-10-16T09-37-24.771314.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T09-37-24.771314.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T09_37_24.771314", "path": ["**/details_harness|gsm8k|5_2023-10-16T09-37-24.771314.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T09-37-24.771314.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T09_37_24.771314", "path": ["**/details_harness|winogrande|5_2023-10-16T09-37-24.771314.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T09-37-24.771314.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T09_37_24.771314", "path": ["results_2023-10-16T09-37-24.771314.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T09-37-24.771314.parquet"]}]}]}
2023-10-16T08:37:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model Lajonbot/WizardLM-13B-V1.2-PL-lora_unload on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T09:37:24.771314(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lajonbot/WizardLM-13B-V1.2-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T09:37:24.771314(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lajonbot/WizardLM-13B-V1.2-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T09:37:24.771314(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 30, 31, 178, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Lajonbot/WizardLM-13B-V1.2-PL-lora_unload## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Lajonbot/WizardLM-13B-V1.2-PL-lora_unload on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T09:37:24.771314(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
dc4313b87ebfe501d5334ee14a648e9b6c0d10e3
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_200" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jjonhwa/SECOND_KOWIKI_RETRIEVE_200
[ "region:us" ]
2023-10-16T08:48:15+00:00
{"dataset_info": {"features": [{"name": "ctxs", "list": [{"name": "score", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 135949554, "num_examples": 15504}], "download_size": 73942447, "dataset_size": 135949554}}
2023-10-17T00:46:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SECOND_KOWIKI_RETRIEVE_200" More Information needed
[ "# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_200\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_200\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SECOND_KOWIKI_RETRIEVE_200\"\n\nMore Information needed" ]
aef002daaa10d151457958a28b162fc996825023
This repo contains the data for our paper "SOUL: Towards Sentiment and Opinion Understanding of Language" in EMNLP 2023. [Github repo](https://github.com/DAMO-NLP-SG/SOUL/tree/main) ## Statistics The SOUL dataset comprises 15,028 statements related to 3,638 reviews, resulting in an average of 4.13 statements per review. To create training, development, and test sets, we split the reviews in a ratio of 6:1:3, respectively. | Split | # reviews | True | False | Not-given | # statements | |-------|-----------|------|-------|-----------|--------------| | Train | 2,182 | 3,675 | 2,159 | 3,000 | 8,834 | | Dev | 365 | 617 | 503 | 361 | 1,481 | | Test | 1,091 | 1,956 | 1,664 | 1,093 | 4,713 | | Total | 3,638 | 6,248 | 3,613 | 5,167 | 15,028 |
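The split sizes above can be cross-checked with a few lines of Python; this is a minimal sketch using numbers copied from the statistics table (not fetched from the dataset itself):

```python
# Per-split counts copied from the statistics table above.
splits = {
    "train": {"reviews": 2_182, "statements": 8_834},
    "dev": {"reviews": 365, "statements": 1_481},
    "test": {"reviews": 1_091, "statements": 4_713},
}

total_reviews = sum(s["reviews"] for s in splits.values())
total_statements = sum(s["statements"] for s in splits.values())

print(total_reviews)     # 3638 reviews in total
print(total_statements)  # 15028 statements in total
print(round(total_statements / total_reviews, 2))  # 4.13 statements per review
```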
DAMO-NLP-SG/SOUL
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:10K<n<100K", "language:en", "license:mit", "region:us" ]
2023-10-16T08:53:00+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "text-generation"]}
2023-10-16T10:55:43+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us
This repo contains the data for our paper "SOUL: Towards Sentiment and Opinion Understanding of Language" in EMNLP 2023. Github repo Statistics ---------- The SOUL dataset comprises 15,028 statements related to 3,638 reviews, resulting in an average of 4.13 statements per review. To create training, development, and test sets, we split the reviews in a ratio of 6:1:3, respectively.
[]
[ "TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n" ]
[ 49 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n" ]
e27b67d89e0477837e92f7fac1cd1ca96c6ad1fd
# Dataset Card for SemEval 2024 Task 2 ## Dataset Description - **Homepage:** https://sites.google.com/view/nli4ct/semeval-2024?authuser=0 - **Pubmed:** False - **Public:** True - **Tasks:** TE ## Dataset (Description copied from dataset homepage) The statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes. ## Description Each Clinical Trial Report (CTR) consists of 4 sections: Eligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial Intervention - Information concerning the type, dosage, frequency, and duration of treatments being studied. Results - Number of participants in the trial, outcome measures, units, and the results. Adverse events - These are signs and symptoms observed in patients during the clinical trial. For this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics. ## Citation Information ``` @article{, author = {}, title = {}, journal = {}, volume = {}, year = {}, url = {}, doi = {}, biburl = {}, bibsource = {} } ```
bigbio/sem_eval_2024_task_2
[ "multilinguality:monolingual", "language:en", "region:us" ]
2023-10-16T08:54:10+00:00
{"language": ["en"], "multilinguality": "monolingual", "pretty_name": "SemEval 2024 Task 2", "bigbio_language": ["English"], "homepage": "https://allenai.org/data/scitail", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]}
2024-01-15T20:57:10+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #region-us
# Dataset Card for SemEval 2024 Task 2 ## Dataset Description - Homepage: URL - Pubmed: False - Public: True - Tasks: TE ## Dataset (Description copied from dataset homepage) The statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes. ## Description Each Clinical Trial Report (CTR) consists of 4 sections: Eligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial Intervention - Information concerning the type, dosage, frequency, and duration of treatments being studied. Results - Number of participants in the trial, outcome measures, units, and the results. Adverse events - These are signs and symptoms observed in patients during the clinical trial. For this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics. ''' @article{, author = {}, title = {}, journal = {}, volume = {}, year = {}, url = {}, doi = {}, biburl = {}, bibsource = {} }
[ "# Dataset Card for SemEval 2024 Task 2", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TE", "## Dataset\n(Description copied from dataset homepage)\n\nThe statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes.", "## Description\nEach Clinical Trial Report (CTR) consists of 4 sections:\nEligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial\nIntervention - Information concerning the type, dosage, frequency, and duration of treatments being studied.\nResults - Number of participants in the trial, outcome measures, units, and the results.\nAdverse events - These are signs and symptoms observed in patients during the clinical trial.\nFor this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics.\n\n\n\n\n'''\n@article{,\n author = {},\n title = {},\n journal = {},\n volume = {},\n year = {},\n url = {},\n doi = {},\n biburl = {},\n bibsource = {}\n}" ]
[ "TAGS\n#multilinguality-monolingual #language-English #region-us \n", "# Dataset Card for SemEval 2024 Task 2", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TE", "## Dataset\n(Description copied from dataset homepage)\n\nThe statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes.", "## Description\nEach Clinical Trial Report (CTR) consists of 4 sections:\nEligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial\nIntervention - Information concerning the type, dosage, frequency, and duration of treatments being studied.\nResults - Number of participants in the trial, outcome measures, units, and the results.\nAdverse events - These are signs and symptoms observed in patients during the clinical trial.\nFor this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics.\n\n\n\n\n'''\n@article{,\n author = {},\n title = {},\n journal = {},\n volume = {},\n year = {},\n url = {},\n doi = {},\n biburl = {},\n bibsource = {}\n}" ]
[ 18, 13, 23, 78, 202 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #region-us \n# Dataset Card for SemEval 2024 Task 2## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TE## Dataset\n(Description copied from dataset homepage)\n\nThe statements and evidence are generated by clinical domain experts, clinical trial organisers, and research oncologists from the Cancer Research UK Manchester Institute and the Digital Experimental Cancer Medicine Team. There are a total of (TBD) statements split evenly across the different sections and classes.## Description\nEach Clinical Trial Report (CTR) consists of 4 sections:\nEligibility criteria - A set of conditions for patients to be allowed to take part in the clinical trial\nIntervention - Information concerning the type, dosage, frequency, and duration of treatments being studied.\nResults - Number of participants in the trial, outcome measures, units, and the results.\nAdverse events - These are signs and symptoms observed in patients during the clinical trial.\nFor this task, each CTR may contain 1-2 patient groups, called cohorts or arms. These groups may receive different treatments, or have different baseline characteristics.\n\n\n\n\n'''\n@article{,\n author = {},\n title = {},\n journal = {},\n volume = {},\n year = {},\n url = {},\n doi = {},\n biburl = {},\n bibsource = {}\n}" ]
0cd599f4498d1663f2ced509e6020e37e18e8d13
# Dataset Card for "19128c17" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/19128c17
[ "region:us" ]
2023-10-16T08:54:48+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 188, "num_examples": 10}], "download_size": 1339, "dataset_size": 188}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T08:54:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "19128c17" More Information needed
[ "# Dataset Card for \"19128c17\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"19128c17\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"19128c17\"\n\nMore Information needed" ]
74294296df9182ab2923d42367e040742e7f9059
# Dataset Card for "inferno_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pphildan/inferno_train3
[ "region:us" ]
2023-10-16T09:07:55+00:00
{"dataset_info": {"features": [{"name": "data", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "target", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 1267120, "num_examples": 10}], "download_size": 1000243, "dataset_size": 1267120}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T14:07:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "inferno_train" More Information needed
[ "# Dataset Card for \"inferno_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"inferno_train\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"inferno_train\"\n\nMore Information needed" ]
d7742c847663f224c33b2c91d3c5b0d6c20698fa
# 六万条农业微调数据 (60,000 agricultural fine-tuning examples)
haonanqqq/AgriSFT_60k
[ "region:us" ]
2023-10-16T09:18:42+00:00
{}
2023-10-16T09:25:09+00:00
[]
[]
TAGS #region-us
# 六万条农业微调数据
[ "# 六万条农业微调数据" ]
[ "TAGS\n#region-us \n", "# 六万条农业微调数据" ]
[ 6, 9 ]
[ "passage: TAGS\n#region-us \n# 六万条农业微调数据" ]
e252cf78ced8a0ea5f62cfd591784cdbbddbac8a
# AgentInstruct Dataset <p align="center"> 🤗 <a href="https://huggingface.co/THUDM/agentlm-70b" target="_blank">[Models]</a> • 💻 <a href="https://github.com/THUDM/AgentTuning" target="_blank">[Github Repo]</a> • 📌 <a href="https://THUDM.github.io/AgentTuning/" target="_blank">[Project Page]</a> • 📃 <a href="https://arxiv.org/abs/2310.12823" target="_blank">[Paper]</a> </p> **AgentInstruct** is a meticulously curated dataset featuring **1,866** high-quality interactions, designed to enhance AI agents across six diverse real-world tasks, leveraging innovative methods like **Task Derivation** and **Self-Instruct**. - 🔍 **CoT** - Harness the power of [ReAct](https://react-lm.github.io/), offering detailed thought explanations for each action, ensuring an intricate understanding of the model's decision-making journey. - 🌍 **Diversity** - Spanning 6 real-world scenarios, from Daily Household Routines to Database Operations, and their average turns range from 5 to 35. - 🎯 **Precision** - Not all trajectories of GPT-4 are effective! Ours are rigorously filtered using strict rewards to ensure top-notch quality. - ✅ **Assurance** - Rigorous checks to avoid data leakage, ensuring pristine dataset quality. ## Task Overview | Task | # Filt. Traj. | Avg # Filt. Traj. Turns | |---|---|---| |ALFWorld|336|13.52| |WebShop|351|3.68| |Mind2Web|122|1.00| |Knowledge Graph|324|6.04| |Operating System|195|3.85| |Database|538|2.06| |**AgentInstruct**|1866|5.24| AgentInstruct includes 1,866 trajectories from 6 agent tasks. "Traj." stands for interaction trajectory. "Filt. Traj." stands for filtered trajectories. ## Models **AgentLM** models are produced by mixed training on the AgentInstruct dataset and the ShareGPT dataset, starting from Llama-2-chat models. The models follow the conversation format of [Llama-2-chat](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), with system prompt fixed as ``` You are a helpful, respectful and honest assistant. 
``` 7B, 13B, and 70B models are available on Huggingface model hub. |Model|Huggingface Repo| |---|---| |AgentLM-7B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-7b) | |AgentLM-13B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-13b) | |AgentLM-70B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-70b) | Check our [[Github Repo]](https://github.com/THUDM/AgentTuning) for details about **AgentTuning**. ## Citation If you find our work useful, please consider citing AgentTuning: ``` @misc{zeng2023agenttuning, title={AgentTuning: Enabling Generalized Agent Abilities for LLMs}, author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang}, year={2023}, eprint={2310.12823}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
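For readers unfamiliar with the Llama-2-chat conversation format referenced above, here is a minimal, hedged sketch of the prompt layout. The `[INST]`/`<<SYS>>` markers follow the publicly documented Llama-2-chat convention; the fixed system prompt is truncated in this card, so the string below is illustrative rather than authoritative:

```python
# Illustrative sketch of the Llama-2-chat prompt layout that AgentLM models expect.
# The exact fixed system prompt is set by the authors; shown abbreviated here.
SYSTEM = "You are a helpful, respectful and honest assistant."

def build_prompt(system: str, user: str) -> str:
    # Llama-2-chat wraps the system prompt in <<SYS>> markers inside the
    # first [INST] ... [/INST] instruction block.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(build_prompt(SYSTEM, "List the files in the current directory."))
```

The model's reply would follow the closing `[/INST]`; later turns append further `[INST] ... [/INST]` blocks.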
THUDM/AgentInstruct
[ "language:en", "arxiv:2310.12823", "region:us" ]
2023-10-16T09:27:58+00:00
{"language": ["en"], "pretty_name": "AgentInstruct", "configs": [{"config_name": "default", "data_files": [{"split": "os", "path": "data/os-*"}, {"split": "db", "path": "data/db-*"}, {"split": "alfworld", "path": "data/alfworld-*"}, {"split": "webshop", "path": "data/webshop-*"}, {"split": "kg", "path": "data/kg-*"}, {"split": "mind2web", "path": "data/mind2web-*"}]}], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "loss", "dtype": "bool"}, {"name": "value", "dtype": "string"}]}, {"name": "id", "dtype": "string"}], "splits": [{"name": "os", "num_bytes": 660245, "num_examples": 195}, {"name": "db", "num_bytes": 1436655, "num_examples": 538}, {"name": "alfworld", "num_bytes": 1223363, "num_examples": 336}, {"name": "webshop", "num_bytes": 1602648, "num_examples": 351}, {"name": "kg", "num_bytes": 2960010, "num_examples": 324}, {"name": "mind2web", "num_bytes": 159590, "num_examples": 122}], "download_size": 1255385, "dataset_size": 8042511}}
2023-10-23T11:36:19+00:00
[ "2310.12823" ]
[ "en" ]
TAGS #language-English #arxiv-2310.12823 #region-us
AgentInstruct Dataset ===================== [[Models]](URL target=) • [[Github Repo]](URL target=) • [[Project Page]](URL target=) • [[Paper]](URL target=) AgentInstruct is a meticulously curated dataset featuring 1,866 high-quality interactions, designed to enhance AI agents across six diverse real-world tasks, leveraging innovative methods like Task Derivation and Self-Instruct. * CoT - Harness the power of ReAct, offering detailed thought explanations for each action, ensuring an intricate understanding of the model's decision-making journey. * Diversity - Spanning 6 real-world scenarios, from Daily Household Routines to Database Operations, and their average turns range from 5 to 35. * Precision - Not all trajectories of GPT-4 are effective! Ours are rigorously filtered using strict rewards to ensure top-notch quality. * Assurance - Rigorous checks to avoid data leakage, ensuring pristine dataset quality. Task Overview ------------- Task: ALFWorld, # Filt. Traj.: 336, Avg # Filt. Traj. Turns: 13.52 Task: WebShop, # Filt. Traj.: 351, Avg # Filt. Traj. Turns: 3.68 Task: Mind2Web, # Filt. Traj.: 122, Avg # Filt. Traj. Turns: 1.00 Task: Knowledge Graph, # Filt. Traj.: 324, Avg # Filt. Traj. Turns: 6.04 Task: Operating System, # Filt. Traj.: 195, Avg # Filt. Traj. Turns: 3.85 Task: Database, # Filt. Traj.: 538, Avg # Filt. Traj. Turns: 2.06 Task: AgentInstruct, # Filt. Traj.: 1866, Avg # Filt. Traj. Turns: 5.24 AgentInstruct includes 1,866 trajectories from 6 agents tasks. "Traj." stands for interaction trajectory. "Filt. Traj." stands for filtered trajectories. Models ------ AgentLM models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models. The models follow the conversation format of Llama-2-chat, with system prompt fixed as 7B, 13B, and 70B models are available on Huggingface model hub. Check our [[Github Repo]](URL for details about AgentTuning. If you find our work useful, please consider citing AgentTuning:
[ "# Filt. Traj.: 336, Avg # Filt. Traj. Turns: 13.52\nTask: WebShop, # Filt. Traj.: 351, Avg # Filt. Traj. Turns: 3.68\nTask: Mind2Web, # Filt. Traj.: 122, Avg # Filt. Traj. Turns: 1.00\nTask: Knowledge Graph, # Filt. Traj.: 324, Avg # Filt. Traj. Turns: 6.04\nTask: Operating System, # Filt. Traj.: 195, Avg # Filt. Traj. Turns: 3.85\nTask: Database, # Filt. Traj.: 538, Avg # Filt. Traj. Turns: 2.06\nTask: AgentInstruct, # Filt. Traj.: 1866, Avg # Filt. Traj. Turns: 5.24\n\n\nAgentInstruct includes 1,866 trajectories from\n6 agents tasks. \"Traj.\" stands for interaction trajectory. \"Filt. Traj.\"\nstands for filtered trajectories.\n\n\nModels\n------\n\n\nAgentLM models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models.\n\n\nThe models follow the conversation format of Llama-2-chat, with system prompt fixed as\n\n\n7B, 13B, and 70B models are available on Huggingface model hub.\n\n\n\nCheck our [[Github Repo]](URL for details about AgentTuning.\n\n\nIf you find our work useful, please consider citing AgentTuning:" ]
[ "TAGS\n#language-English #arxiv-2310.12823 #region-us \n", "# Filt. Traj.: 336, Avg # Filt. Traj. Turns: 13.52\nTask: WebShop, # Filt. Traj.: 351, Avg # Filt. Traj. Turns: 3.68\nTask: Mind2Web, # Filt. Traj.: 122, Avg # Filt. Traj. Turns: 1.00\nTask: Knowledge Graph, # Filt. Traj.: 324, Avg # Filt. Traj. Turns: 6.04\nTask: Operating System, # Filt. Traj.: 195, Avg # Filt. Traj. Turns: 3.85\nTask: Database, # Filt. Traj.: 538, Avg # Filt. Traj. Turns: 2.06\nTask: AgentInstruct, # Filt. Traj.: 1866, Avg # Filt. Traj. Turns: 5.24\n\n\nAgentInstruct includes 1,866 trajectories from\n6 agents tasks. \"Traj.\" stands for interaction trajectory. \"Filt. Traj.\"\nstands for filtered trajectories.\n\n\nModels\n------\n\n\nAgentLM models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models.\n\n\nThe models follow the conversation format of Llama-2-chat, with system prompt fixed as\n\n\n7B, 13B, and 70B models are available on Huggingface model hub.\n\n\n\nCheck our [[Github Repo]](URL for details about AgentTuning.\n\n\nIf you find our work useful, please consider citing AgentTuning:" ]
[ 19, 356 ]
[ "passage: TAGS\n#language-English #arxiv-2310.12823 #region-us \n# Filt. Traj.: 336, Avg # Filt. Traj. Turns: 13.52\nTask: WebShop, # Filt. Traj.: 351, Avg # Filt. Traj. Turns: 3.68\nTask: Mind2Web, # Filt. Traj.: 122, Avg # Filt. Traj. Turns: 1.00\nTask: Knowledge Graph, # Filt. Traj.: 324, Avg # Filt. Traj. Turns: 6.04\nTask: Operating System, # Filt. Traj.: 195, Avg # Filt. Traj. Turns: 3.85\nTask: Database, # Filt. Traj.: 538, Avg # Filt. Traj. Turns: 2.06\nTask: AgentInstruct, # Filt. Traj.: 1866, Avg # Filt. Traj. Turns: 5.24\n\n\nAgentInstruct includes 1,866 trajectories from\n6 agents tasks. \"Traj.\" stands for interaction trajectory. \"Filt. Traj.\"\nstands for filtered trajectories.\n\n\nModels\n------\n\n\nAgentLM models are produced by mixed training on AgentInstruct dataset and ShareGPT dataset from Llama-2-chat models.\n\n\nThe models follow the conversation format of Llama-2-chat, with system prompt fixed as\n\n\n7B, 13B, and 70B models are available on Huggingface model hub.\n\n\n\nCheck our [[Github Repo]](URL for details about AgentTuning.\n\n\nIf you find our work useful, please consider citing AgentTuning:" ]
d91a89e99efb9220e3df7542ddd448351d541a9c
# Digikala Dataset Small 5m - Digikala product titles translated by the standard Google Translate API - category and brand English translations might be invalid, but title_en has been checked
AliAsh/digikala_translated_small_5m
[ "size_categories:1B<n<10B", "language:fa", "region:us" ]
2023-10-16T09:46:46+00:00
{"language": ["fa"], "size_categories": ["1B<n<10B"], "pretty_name": "digikala-5m"}
2023-10-16T10:32:22+00:00
[]
[ "fa" ]
TAGS #size_categories-1B<n<10B #language-Persian #region-us
# Digikala Dataset Small 5m - digikala product titles translated by standard google translate api - category and brand english translation might be invalid but title_en checked -
[ "# Digikala Dataset Small 5m\n\n- digikala product titles translated by standard google translate api\n- category and brand english translation might be invalid but title_en checked\n-" ]
[ "TAGS\n#size_categories-1B<n<10B #language-Persian #region-us \n", "# Digikala Dataset Small 5m\n\n- digikala product titles translated by standard google translate api\n- category and brand english translation might be invalid but title_en checked\n-" ]
[ 23, 39 ]
[ "passage: TAGS\n#size_categories-1B<n<10B #language-Persian #region-us \n# Digikala Dataset Small 5m\n\n- digikala product titles translated by standard google translate api\n- category and brand english translation might be invalid but title_en checked\n-" ]
d625011f02e98ae67b383baeb3b81b750ea5ae7d
# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/microsoft/CodeGPT-small-py - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_microsoft__CodeGPT-small-py", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-16T10:55:21.745604](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__CodeGPT-small-py/blob/main/results_2023-10-16T10-55-21.745604.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.00388003355704698, "em_stderr": 0.0006366682825519956, "f1": 0.016416736577181235, "f1_stderr": 0.0008900949322041355, "acc": 0.24388318863456984, "acc_stderr": 0.007024139410202808 }, "harness|drop|3": { "em": 0.00388003355704698, "em_stderr": 0.0006366682825519956, "f1": 0.016416736577181235, "f1_stderr": 0.0008900949322041355 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }, "harness|winogrande|5": { "acc": 0.4877663772691397, "acc_stderr": 0.014048278820405616 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
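The aggregated JSON block above can be consumed programmatically. A minimal sketch, using the per-task structure and numbers quoted in this card (reproduced as a plain dict so nothing needs to be downloaded; the `harness|...` keys are taken from the JSON above):

```python
# Aggregated results as shown in this card, keyed by "harness|<task>|<n_shot>".
results = {
    "harness|winogrande|5": {"acc": 0.4877663772691397,
                             "acc_stderr": 0.014048278820405616},
    "harness|gsm8k|5": {"acc": 0.0, "acc_stderr": 0.0},
}

def metric(results: dict, task: str, name: str) -> float:
    # Look up one metric value for one task entry.
    return results[task][name]

print(round(metric(results, "harness|winogrande|5", "acc"), 4))  # → 0.4878
```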
open-llm-leaderboard/details_microsoft__CodeGPT-small-py
[ "region:us" ]
2023-10-16T09:55:24+00:00
{"pretty_name": "Evaluation run of microsoft/CodeGPT-small-py", "dataset_summary": "Dataset automatically created during the evaluation run of model [microsoft/CodeGPT-small-py](https://huggingface.co/microsoft/CodeGPT-small-py) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_microsoft__CodeGPT-small-py\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T10:55:21.745604](https://huggingface.co/datasets/open-llm-leaderboard/details_microsoft__CodeGPT-small-py/blob/main/results_2023-10-16T10-55-21.745604.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00388003355704698,\n \"em_stderr\": 0.0006366682825519956,\n \"f1\": 0.016416736577181235,\n \"f1_stderr\": 0.0008900949322041355,\n \"acc\": 0.24388318863456984,\n \"acc_stderr\": 0.007024139410202808\n },\n \"harness|drop|3\": {\n \"em\": 0.00388003355704698,\n \"em_stderr\": 0.0006366682825519956,\n \"f1\": 0.016416736577181235,\n \"f1_stderr\": 0.0008900949322041355\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4877663772691397,\n \"acc_stderr\": 0.014048278820405616\n }\n}\n```", "repo_url": "https://huggingface.co/microsoft/CodeGPT-small-py", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T10_55_21.745604", "path": ["**/details_harness|drop|3_2023-10-16T10-55-21.745604.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T10-55-21.745604.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T10_55_21.745604", "path": ["**/details_harness|gsm8k|5_2023-10-16T10-55-21.745604.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T10-55-21.745604.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T10_55_21.745604", "path": ["**/details_harness|winogrande|5_2023-10-16T10-55-21.745604.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T10-55-21.745604.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T10_55_21.745604", "path": ["results_2023-10-16T10-55-21.745604.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T10-55-21.745604.parquet"]}]}]}
2023-10-16T09:55:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model microsoft/CodeGPT-small-py on the Open LLM Leaderboard. The dataset is composed of 3 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-16T10:55:21.745604(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model microsoft/CodeGPT-small-py on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T10:55:21.745604(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model microsoft/CodeGPT-small-py on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-16T10:55:21.745604(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 20, 31, 168, 66, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of microsoft/CodeGPT-small-py## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model microsoft/CodeGPT-small-py on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T10:55:21.745604(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]
fba13360b70420e10397d77f55b4b62310d74285
SpanEx consists of 7071 instances annotated for span interactions. SpanEx is the first dataset with human phrase-level interaction explanations with explicit labels for interaction types. Moreover, SpanEx is annotated by three annotators, which opens new avenues for studies of human explanation agreement -- an understudied area in the explainability literature. Our study reveals that while human annotators often agree on span interactions, they also offer complementary reasons for a prediction, collectively providing a comprehensive set of reasons for a prediction. We collect explanations of span interactions for NLI on the SNLI dataset and for FC on the FEVER dataset. Please cite the following paper if you use this dataset: ``` @inproceedings{choudhury-etal-2023-explaining, title = "Explaining Interactions Between Text Spans", author = "Choudhury, Sagnik and Atanasova, Pepa and Augenstein, Isabelle", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.783", doi = "10.18653/v1/2023.emnlp-main.783", pages = "12709--12730", abstract = "Reasoning over spans of tokens from different parts of the input is essential for natural language understanding (NLU) tasks such as fact-checking (FC), machine reading comprehension (MRC) or natural language inference (NLI). However, existing highlight-based explanations primarily focus on identifying individual important features or interactions only between adjacent tokens or tuples of tokens. Most notably, there is a lack of annotations capturing the human decision-making process with respect to the necessary interactions for informed decision-making in such tasks. 
To bridge this gap, we introduce SpanEx, a multi-annotator dataset of human span interaction explanations for two NLU tasks: NLI and FC. We then investigate the decision-making processes of multiple fine-tuned large language models in terms of the employed connections between spans in separate parts of the input and compare them to the human reasoning processes. Finally, we present a novel community detection based unsupervised method to extract such interaction explanations. We make the code and the dataset available on [Github](https://github.com/copenlu/spanex). The dataset is also available on [Huggingface datasets](https://huggingface.co/datasets/copenlu/spanex).", } ```
copenlu/spanex
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:mit", "rationale-extraction", "reasoning", "nli", "fact-checking", "explainability", "region:us" ]
2023-10-16T10:11:41+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "spanex", "tags": ["rationale-extraction", "reasoning", "nli", "fact-checking", "explainability"], "configs": [{"config_name": "snli_extended", "data_files": [{"split": "test", "path": "snli_extended.jsonl"}]}, {"config_name": "fever_extended", "data_files": [{"split": "test", "path": "fever_extended.jsonl"}]}, {"config_name": "snli", "data_files": [{"split": "test", "path": "snli.jsonl"}]}, {"config_name": "fever", "data_files": [{"split": "test", "path": "fever.jsonl"}]}]}
2023-12-18T20:12:45+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #rationale-extraction #reasoning #nli #fact-checking #explainability #region-us
SpanEx consists of 7071 instances annotated for span interactions. SpanEx is the first dataset with human phrase-level interaction explanations with explicit labels for interaction types. Moreover, SpanEx is annotated by three annotators, which opens new avenues for studies of human explanation agreement -- an understudied area in the explainability literature. Our study reveals that while human annotators often agree on span interactions, they also offer complementary reasons for a prediction, collectively providing a comprehensive set of reasons for a prediction. We collect explanations of span interactions for NLI on the SNLI dataset and for FC on the FEVER dataset. Please cite the following paper if you use this dataset:
[]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #rationale-extraction #reasoning #nli #fact-checking #explainability #region-us \n" ]
[ 60 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-mit #rationale-extraction #reasoning #nli #fact-checking #explainability #region-us \n" ]
c979aaa3723ab1da50949bbe28a9edaabfb2fc78
# Dataset Card for "chinese_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/chinese_train
[ "region:us" ]
2023-10-16T10:19:14+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "tweet", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6103621, "num_examples": 46049}], "download_size": 4415296, "dataset_size": 6103621}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:19:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "chinese_train" More Information needed
[ "# Dataset Card for \"chinese_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"chinese_train\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"chinese_train\"\n\nMore Information needed" ]
ffe0327cde226c705a84a23f7ce82250cae32d0b
# Dataset Card for "chinese_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/chinese_test
[ "region:us" ]
2023-10-16T10:19:16+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "tweet", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1574339, "num_examples": 11513}], "download_size": 1143927, "dataset_size": 1574339}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:19:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for "chinese_test" More Information needed
[ "# Dataset Card for \"chinese_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"chinese_test\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"chinese_test\"\n\nMore Information needed" ]
9f4aad9fcccd6519789771faf208b8f5c834b87b
# Dataset Card for "french_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/french_train
[ "region:us" ]
2023-10-16T10:20:26+00:00
{"dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2689968, "num_examples": 7726}], "download_size": 1740195, "dataset_size": 2689968}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:20:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "french_train" More Information needed
[ "# Dataset Card for \"french_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"french_train\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"french_train\"\n\nMore Information needed" ]
0bc30e25d0a2f5664f861ff734b27f81deb3119e
# Dataset Card for "hindi_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/hindi_train
[ "region:us" ]
2023-10-16T10:20:51+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 318038, "num_examples": 2212}], "download_size": 202043, "dataset_size": 318038}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:20:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hindi_train" More Information needed
[ "# Dataset Card for \"hindi_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hindi_train\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hindi_train\"\n\nMore Information needed" ]
bf866e28ce957a2fbbbcfb73e67b2f5ce42e78ab
# Dataset Card for "hindi_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/hindi_test
[ "region:us" ]
2023-10-16T10:20:52+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 81183, "num_examples": 554}], "download_size": 53504, "dataset_size": 81183}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:21:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hindi_test" More Information needed
[ "# Dataset Card for \"hindi_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hindi_test\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"hindi_test\"\n\nMore Information needed" ]
cc63e67aadf0d82e8873a0f04d01f9956a82614c
# Dataset Card for "russian_train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/russian_train
[ "region:us" ]
2023-10-16T10:21:33+00:00
{"dataset_info": {"features": [{"name": "sentiment", "dtype": "string"}, {"name": "tweet", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 315825, "num_examples": 1040}], "download_size": 174896, "dataset_size": 315825}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:21:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "russian_train" More Information needed
[ "# Dataset Card for \"russian_train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"russian_train\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"russian_train\"\n\nMore Information needed" ]
4de00a5df9da5806c965556e6b9f34ed1048eebc
# Dataset Card for "russian_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/russian_test
[ "region:us" ]
2023-10-16T10:21:36+00:00
{"dataset_info": {"features": [{"name": "sentiment", "dtype": "string"}, {"name": "tweet", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 75684, "num_examples": 260}], "download_size": 43226, "dataset_size": 75684}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T10:21:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "russian_test" More Information needed
[ "# Dataset Card for \"russian_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"russian_test\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"russian_test\"\n\nMore Information needed" ]
0ed65c448fb1eb60dd36df70e6ff4add24a8c13e
# Dataset Card for "slovene_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tngarg/Slovene_english
[ "region:us" ]
2023-10-16T10:22:01+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1887281, "num_examples": 17479}], "download_size": 1288565, "dataset_size": 1887281}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-08T18:19:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "slovene_test" More Information needed
[ "# Dataset Card for \"slovene_test\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"slovene_test\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"slovene_test\"\n\nMore Information needed" ]
92fda6688f1900e8ff79170a7c26dda367074131
# Dataset Card for "svamp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dahoas/svamp
[ "region:us" ]
2023-10-16T10:24:12+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 347184, "num_examples": 700}, {"name": "test", "num_bytes": 148692, "num_examples": 300}], "download_size": 169303, "dataset_size": 495876}}
2023-10-16T10:27:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "svamp" More Information needed
[ "# Dataset Card for \"svamp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"svamp\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"svamp\"\n\nMore Information needed" ]
ef92f641b78145859a247c5da65b9e2febe611fa
Dataset from https://www.kaggle.com/datasets/rounakbanik/pokemon
silaseic/pokemon
[ "license:unknown", "region:us" ]
2023-10-16T10:29:11+00:00
{"license": "unknown"}
2023-10-16T10:30:54+00:00
[]
[]
TAGS #license-unknown #region-us
Dataset from URL
[]
[ "TAGS\n#license-unknown #region-us \n" ]
[ 13 ]
[ "passage: TAGS\n#license-unknown #region-us \n" ]
179e33e0faa39f377e9a2692b034c8c7b8de8e13
# Dataset Card for "prompted_svamp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dahoas/prompted_svamp
[ "region:us" ]
2023-10-16T10:30:13+00:00
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1179345, "num_examples": 700}, {"name": "test", "num_bytes": 499449, "num_examples": 300}], "download_size": 702499, "dataset_size": 1678794}}
2023-10-16T10:30:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "prompted_svamp" More Information needed
[ "# Dataset Card for \"prompted_svamp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"prompted_svamp\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"prompted_svamp\"\n\nMore Information needed" ]
c8f9d42873b9157acec4a651b9f65dcd0ef9b384
# Dataset Card for "0ed37a8a" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
result-kand2-sdxl-wuerst-karlo/0ed37a8a
[ "region:us" ]
2023-10-16T11:33:03+00:00
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 171, "num_examples": 10}], "download_size": 1326, "dataset_size": 171}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-10-16T11:33:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "0ed37a8a" More Information needed
[ "# Dataset Card for \"0ed37a8a\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"0ed37a8a\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"0ed37a8a\"\n\nMore Information needed" ]
aa9522979a48c42eac200ed1c51bc35362ccbe74
The dataset has been created using the [OliveFarm](https://huggingface.co/spaces/OdiaGenAI/Olive_Farm) web application. The following domains are covered in this dataset: Art, Sports (Cricket, Football, Olympics), Politics, History, Cooking, Environment, and Music. Contributors: Shahid, Parul.
OdiaGenAI/instruction_set_hindi_1035
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:hi", "art", "sports", "politics", "india", "music", "cooking", "region:us" ]
2023-10-16T12:02:20+00:00
{"language": ["hi"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "tags": ["art", "sports", "politics", "india", "music", "cooking"]}
2023-10-16T12:09:51+00:00
[]
[ "hi" ]
TAGS #task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #art #sports #politics #india #music #cooking #region-us
The dataset has been created using OliveFarm web application. Following domains have been covered in this dataset:- - Art - Sports (Cricket, Football, Olympics) - Politics - History - Cooking - Environment - Music Contributors: - - Shahid - Parul.
[]
[ "TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #art #sports #politics #india #music #cooking #region-us \n" ]
[ 49 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Hindi #art #sports #politics #india #music #cooking #region-us \n" ]
121fb02a07d58ac97992daa60879cb92b90aad6f
# Dataset Card for "Collective Cognition ChatGPT Conversations" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description ### Dataset Summary The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis. Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai ### Supported Tasks - **Language Model Finetuning**: Training through behavior cloning and knowledge instilling. - **Dialogue Understanding**: Analyzing the flow and structure of conversations. - **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues. - **Topic Modeling**: Determining the topics discussed within the chats. 
### Languages The primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats. ## Dataset Structure ### Data Instances A typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either "human" or "gpt"). The dataset is formatted to conform with ShareGPT standard. This is an example entry in the dataset, noting that turns are variable and can be many or as few as two: ```json { "title": "Interest Formulas Clarification", "model_name": "GPT-4", "custom_instruction": false, "conversations": [ { "from": "human", "value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly." }, { "from": "assistant", "value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. 
Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term." } ] } ``` ### Data Splits Currently, the dataset is not divided into specific splits (train, test, validation). ## Dataset Creation ### Curation Rationale The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources. ### Source Data The data originates from user contributions on the "Collective Cognition" website. ### Personal and Sensitive Information All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email [email protected] ## Considerations for Using the Data ### Social Impact of Dataset The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration. ### Discussion of Biases There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most. ### Other Known Limitations The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT. 
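The ShareGPT-style instance shown in the Data Instances section above can be traversed with a few lines of plain Python. This is an illustrative sketch only: the abridged sample record and the helper name `turns_by_role` are ours, not part of any dataset tooling; the field names (`title`, `model_name`, `custom_instruction`, `conversations`, `from`, `value`) follow the example entry.

```python
# Minimal sketch: iterating over a ShareGPT-style chat instance like the
# example above. The sample record is abridged from the card's example;
# the role strings follow the JSON shown ("human" / "assistant").
sample = {
    "title": "Interest Formulas Clarification",
    "model_name": "GPT-4",
    "custom_instruction": False,
    "conversations": [
        {"from": "human", "value": "Is this correct? ..."},
        {"from": "assistant", "value": "I'm sorry, but there seems to be some confusion ..."},
    ],
}

def turns_by_role(record, role):
    """Collect all message texts sent by a given role in one chat log."""
    return [turn["value"] for turn in record["conversations"] if turn["from"] == role]

human_turns = turns_by_role(sample, "human")
assistant_turns = turns_by_role(sample, "assistant")
print(len(human_turns), len(assistant_turns))  # prints: 1 1
```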
## Additional Information ### Licensing Information MIT
CollectiveCognition/chats-data-2023-10-16
[ "license:mit", "region:us" ]
2023-10-16T12:06:41+00:00
{"license": "mit"}
2023-10-16T12:07:57+00:00
[]
[]
TAGS #license-mit #region-us
# Dataset Card for "Collective Cognition ChatGPT Conversations" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description ### Dataset Summary The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis. Visit Collective Cognition to browse or contribute chats here: URL ### Supported Tasks - Language Model Finetuning: Training through behavior cloning and knowledge instilling. - Dialogue Understanding: Analyzing the flow and structure of conversations. - Sentiment Analysis: Understanding the sentiment or emotion in the dialogues. - Topic Modeling: Determining the topics discussed within the chats. ### Languages The primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats. ## Dataset Structure ### Data Instances A typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either "human" or "gpt"). The dataset is formatted to conform with ShareGPT standard. 
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two: ### Data Splits Currently, the dataset is not divided into specific splits (train, test, validation). ## Dataset Creation ### Curation Rationale The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources. ### Source Data The data originates from user contributions on the "Collective Cognition" website. ### Personal and Sensitive Information All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL ## Considerations for Using the Data ### Social Impact of Dataset The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration. ### Discussion of Biases There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most. ### Other Known Limitations The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT. ## Additional Information ### Licensing Information MIT
[ "# Dataset Card for \"Collective Cognition ChatGPT Conversations\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description", "### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL", "### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.", "### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.", "## Dataset Structure", "### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are variable and can be 
many or as few as two:", "### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).", "## Dataset Creation", "### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.", "### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.", "### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL", "## Considerations for Using the Data", "### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.", "### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.", "### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.", "## Additional Information", "### Licensing Information\nMIT" ]
[ "TAGS\n#license-mit #region-us \n", "# Dataset Card for \"Collective Cognition ChatGPT Conversations\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description", "### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL", "### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.", "### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.", "## Dataset Structure", "### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting 
that turns are variable and can be many or as few as two:", "### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).", "## Dataset Creation", "### Curation Rationale\nThe dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.", "### Source Data\nThe data originates from user contributions on the \"Collective Cognition\" website.", "### Personal and Sensitive Information\nAll chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@URL", "## Considerations for Using the Data", "### Social Impact of Dataset\nThe dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.", "### Discussion of Biases\nThere might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.", "### Other Known Limitations\nThe dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.", "## Additional Information", "### Licensing Information\nMIT" ]
[ 11, 19, 116, 4, 107, 75, 32, 6, 99, 28, 5, 38, 24, 52, 8, 40, 55, 42, 5, 7 ]
[ "passage: TAGS\n#license-mit #region-us \n# Dataset Card for \"Collective Cognition ChatGPT Conversations\"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description### Dataset Summary\nThe \"Collective Cognition ChatGPT Conversations\" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the \"Collective Cognition\" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.\n\nVisit Collective Cognition to browse or contribute chats here: URL### Supported Tasks\n- Language Model Finetuning: Training through behavior cloning and knowledge instilling.\n- Dialogue Understanding: Analyzing the flow and structure of conversations.\n- Sentiment Analysis: Understanding the sentiment or emotion in the dialogues.\n- Topic Modeling: Determining the topics discussed within the chats.### Languages\nThe primary language of the dataset is English, but any language chat may be present in the dataset as users share more chats.## Dataset Structure### Data Instances\nA typical data instance includes a chat log with a title, model name, whether the chat used custom instructions (currently not included if so), and the content of the message with the role of the sender (either \"human\" or \"gpt\").\n\nThe dataset is formatted to conform with ShareGPT standard.\n\nThis is an example entry in the dataset, noting that turns are 
variable and can be many or as few as two:### Data Splits\nCurrently, the dataset is not divided into specific splits (train, test, validation).## Dataset Creation" ]
5e014734896fbb1740f2673d8b36536976762193
## Dataset Summary The dataset consists of comments from several sources translated to the Spanish language and classified as suicidal ideation/behavior and suicide questions. # Dataset Structure The dataset has 14912 rows (7456 considered as Suicidal Ideation/Behavior and 7456 considered as questions about suicide). ## Dataset fields * `Text`: User comment. * `Label`: 1 if suicidal ideation/behavior; 0 if question about suicide. * `Dataset`: Source of the comment. # Dataset Creation * 7456 Suicidal Ideation/Behavior from the [suicide dataset](https://huggingface.co/datasets/joheras/spanish-suicide-intent). * 7456 questions about suicide automatically generated from suicide documents. # Considerations for Using the Data ## Social Impact of Dataset The dataset could contain some patterns to detect suicidal ideation/behavior. ## Discussion of Biases No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data is collected on Reddit, Twitter, and ChatGPT. So there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide). # Additional Information ## Team * [joheras](https://huggingface.co/joheras)
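A minimal sketch of working with rows in this schema (plain Python, no network access; the `rows` below are hypothetical examples that only mirror the `Text`/`Label` fields described above — loading the real data would instead go through `datasets.load_dataset("PrevenIA/spanish-suicide-intent-information")`):

```python
# Hypothetical rows mirroring the card's schema:
# "Text" = user comment, "Label" = 1 (suicidal ideation/behavior)
# or 0 (question about suicide), "Dataset" = source of the comment.
rows = [
    {"Text": "comentario de ejemplo", "Label": 1, "Dataset": "reddit"},
    {"Text": "¿pregunta sobre el tema?", "Label": 0, "Dataset": "generated"},
]

def split_by_label(rows):
    """Partition rows into the two classes the card defines."""
    ideation = [r for r in rows if r["Label"] == 1]
    questions = [r for r in rows if r["Label"] == 0]
    return ideation, questions

ideation, questions = split_by_label(rows)
print(len(ideation), len(questions))  # → 1 1
```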
PrevenIA/spanish-suicide-intent-information
[ "task_categories:text-classification", "language:es", "license:cc", "region:us" ]
2023-10-16T12:36:22+00:00
{"language": ["es"], "license": "cc", "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Text", "dtype": "string"}, {"name": "Label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2025273, "num_examples": 10736}, {"name": "val", "num_bytes": 236851, "num_examples": 1193}, {"name": "test", "num_bytes": 562511, "num_examples": 2983}], "download_size": 1694607, "dataset_size": 2824635}}
2023-10-16T12:47:11+00:00
[]
[ "es" ]
TAGS #task_categories-text-classification #language-Spanish #license-cc #region-us
## Dataset Summary The dataset consists of comments from several sources translated to the Spanish language and classified as suicidal ideation/behavior and suicide questions. # Dataset Structure The dataset has 14912 rows (7456 considered as Suicidal Ideation/Behavior and 7456 considered as questions about suicide). ## Dataset fields * 'Text': User comment. * 'Label': 1 if suicidal ideation/behavior; 0 if question about suicide. * 'Dataset': Source of the comment. # Dataset Creation * 7456 Suicidal Ideation/Behavior from the suicide dataset. * 7456 questions about suicide automatically generated from suicide documents. # Considerations for Using the Data ## Social Impact of Dataset The dataset could contain some patterns to detect suicidal ideation/behavior. ## Discussion of Biases No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data is collected on Reddit, Twitter, and ChatGPT. So there is probably an age bias because the Internet is used more by younger people. # Additional Information ## Team * joheras
[ "## Dataset Summary\n\nThe dataset consists of comments from several sources translated to Spanish language and classified as suicidal ideation/behavior and suicide questions.", "# Dataset Structure\n\nThe dataset has 14912 rows (7456 considered as Suicidal Ideation/Behavior and 7456 considered as questions about suicide).", "## Dataset fields\n\n* 'Text': User comment.\n* 'Label': 1 if suicidal ideation/behavior; 0 if question about suicide.\n* 'Dataset': Source of the comment", "# Dataset Creation\n\n* 7456 Suicidal Ideation/Behavior from the suicide dataset.\n* 7456 questions about suicide automatically generated from suicide documents.", "# Considerations for Using the Data", "## Social Impact of Dataset\n\nThe dataset could contain some patterns to detect suicidal ideation/behavior.", "## Discussion of Biases\n\nNo measures have been taken to estimate the bias and toxicity embedded in the dataset. However, the most of the data is collected on Reddit, Twitter, and ChatGPT. So there is probably an age bias because the Internet is used more by younger people.", "# Additional Information", "## Team\n\n* joheras" ]
[ "TAGS\n#task_categories-text-classification #language-Spanish #license-cc #region-us \n", "## Dataset Summary\n\nThe dataset consists of comments from several sources translated to Spanish language and classified as suicidal ideation/behavior and suicide questions.", "# Dataset Structure\n\nThe dataset has 14912 rows (7456 considered as Suicidal Ideation/Behavior and 7456 considered as questions about suicide).", "## Dataset fields\n\n* 'Text': User comment.\n* 'Label': 1 if suicidal ideation/behavior; 0 if question about suicide.\n* 'Dataset': Source of the comment", "# Dataset Creation\n\n* 7456 Suicidal Ideation/Behavior from the suicide dataset.\n* 7456 questions about suicide automatically generated from suicide documents.", "# Considerations for Using the Data", "## Social Impact of Dataset\n\nThe dataset could contain some patterns to detect suicidal ideation/behavior.", "## Discussion of Biases\n\nNo measures have been taken to estimate the bias and toxicity embedded in the dataset. However, the most of the data is collected on Reddit, Twitter, and ChatGPT. So there is probably an age bias because the Internet is used more by younger people.", "# Additional Information", "## Team\n\n* joheras" ]
[ 27, 37, 37, 46, 36, 8, 25, 70, 5, 5 ]
[ "passage: TAGS\n#task_categories-text-classification #language-Spanish #license-cc #region-us \n## Dataset Summary\n\nThe dataset consists of comments from several sources translated to Spanish language and classified as suicidal ideation/behavior and suicide questions.# Dataset Structure\n\nThe dataset has 14912 rows (7456 considered as Suicidal Ideation/Behavior and 7456 considered as questions about suicide).## Dataset fields\n\n* 'Text': User comment.\n* 'Label': 1 if suicidal ideation/behavior; 0 if question about suicide.\n* 'Dataset': Source of the comment# Dataset Creation\n\n* 7456 Suicidal Ideation/Behavior from the suicide dataset.\n* 7456 questions about suicide automatically generated from suicide documents.# Considerations for Using the Data## Social Impact of Dataset\n\nThe dataset could contain some patterns to detect suicidal ideation/behavior.## Discussion of Biases\n\nNo measures have been taken to estimate the bias and toxicity embedded in the dataset. However, the most of the data is collected on Reddit, Twitter, and ChatGPT. So there is probably an age bias because the Internet is used more by younger people.# Additional Information## Team\n\n* joheras" ]
546493d76351e3503fbaeba0e1c57e15372633c2
# Dataset Card for "Soldering-Data-Tiny-1016-solder-pad" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AndyLiu0104/Soldering-Data-Tiny-1016-solder-pad
[ "region:us" ]
2023-10-16T12:36:24+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16637648.25, "num_examples": 9606}], "download_size": 10646978, "dataset_size": 16637648.25}}
2023-10-16T12:41:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Soldering-Data-Tiny-1016-solder-pad" More Information needed
[ "# Dataset Card for \"Soldering-Data-Tiny-1016-solder-pad\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Soldering-Data-Tiny-1016-solder-pad\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Soldering-Data-Tiny-1016-solder-pad\"\n\nMore Information needed" ]
86bff66e648548d1f83d359f4df2a17efe38292e
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
pclerc/test1
[ "task_categories:question-answering", "size_categories:1M<n<10M", "language:fr", "license:eupl-1.1", "region:us" ]
2023-10-16T12:45:23+00:00
{"language": ["fr"], "license": "eupl-1.1", "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"]}
2023-10-16T12:53:18+00:00
[]
[ "fr" ]
TAGS #task_categories-question-answering #size_categories-1M<n<10M #language-French #license-eupl-1.1 #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-question-answering #size_categories-1M<n<10M #language-French #license-eupl-1.1 #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 44, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-1M<n<10M #language-French #license-eupl-1.1 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
e9d8d42d9f9f2fbfa876f947323704c55ef34686
# Dataset Card for "korean-child-command-voice_train-0-10000_smaplingRate-16000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
haseong8012/child-10k
[ "region:us" ]
2023-10-16T13:20:16+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "audio", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 2077216016, "num_examples": 10000}], "download_size": 1810220972, "dataset_size": 2077216016}}
2023-10-16T14:06:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "korean-child-command-voice_train-0-10000_smaplingRate-16000" More Information needed
[ "# Dataset Card for \"korean-child-command-voice_train-0-10000_smaplingRate-16000\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"korean-child-command-voice_train-0-10000_smaplingRate-16000\"\n\nMore Information needed" ]
[ 6, 35 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"korean-child-command-voice_train-0-10000_smaplingRate-16000\"\n\nMore Information needed" ]
1beac1b941da76a9c51e3e5b39d230fde9a80983
Conventions followed to decide the polarity: - - labels consisting of a single value are left undisturbed, i.e. if label = 'pos', then it'll be pos - labels consisting of multiple values separated by '&' are processed. If all the labels are the same ('pos&pos&pos' or 'neg&neg'), then the shortened form of the multiple label is assigned as the final label. For example, if label = 'pos&pos&pos', then final label will be 'pos'. - labels consisting of mixed values ('pos&neg&pos' or 'neg&neu&pos') are rejected. Contributors: - - Kusumlata Patiyal
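The conventions above can be sketched as a small helper (a hypothetical implementation, assuming the raw labels are '&'-joined strings exactly as described):

```python
def resolve_polarity(label: str):
    """Collapse a raw label per the conventions above.

    A single value ('pos') is kept as-is; repeated identical values
    ('pos&pos&pos', 'neg&neg') are shortened to one ('pos', 'neg');
    mixed values ('pos&neg&pos', 'neg&neu&pos') are rejected,
    signalled here by returning None.
    """
    parts = label.split("&")
    if len(set(parts)) == 1:
        return parts[0]
    return None  # mixed annotations -> row is rejected

print(resolve_polarity("pos"))          # → pos
print(resolve_polarity("pos&pos&pos"))  # → pos
print(resolve_polarity("neg&neu&pos"))  # → None
```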
OdiaGenAI/sentiment_analysis_hindi
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:hi", "region:us" ]
2023-10-16T13:22:08+00:00
{"language": ["hi"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
2023-10-16T13:34:14+00:00
[]
[ "hi" ]
TAGS #task_categories-text-classification #size_categories-1K<n<10K #language-Hindi #region-us
Conventions followed to decide the polarity: - - labels consisting of a single value are left undisturbed, i.e. if label = 'pos', then it'll be pos - labels consisting of multiple values separated by '&' are processed. If all the labels are the same ('pos&pos&pos' or 'neg&neg'), then the shortened form of the multiple label is assigned as the final label. For example, if label = 'pos&pos&pos', then final label will be 'pos'. - labels consisting of mixed values ('pos&neg&pos' or 'neg&neu&pos') are rejected. Contributors: - - Kusumlata Patiyal
[]
[ "TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Hindi #region-us \n" ]
[ 33 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Hindi #region-us \n" ]
b25c122290535563729eb49ac917ce4c5ddf6718
# Dataset Card for Evaluation run of ajibawa-2023/scarlett-7b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/ajibawa-2023/scarlett-7b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** [email protected] ### Dataset Summary Dataset automatically created during the evaluation run of model [ajibawa-2023/scarlett-7b](https://huggingface.co/ajibawa-2023/scarlett-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_ajibawa-2023__scarlett-7b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-10-28T20:24:47.914205](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-7b/blob/main/results_2023-10-28T20-24-47.914205.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "em": 0.06103187919463087, "em_stderr": 0.002451565190705489, "f1": 0.12157193791946294, "f1_stderr": 0.002704445932722437, "acc": 0.3622108542921648, "acc_stderr": 0.007057235105359207 }, "harness|drop|3": { "em": 0.06103187919463087, "em_stderr": 0.002451565190705489, "f1": 0.12157193791946294, "f1_stderr": 0.002704445932722437 }, "harness|gsm8k|5": { "acc": 0.003032600454890068, "acc_stderr": 0.0015145735612245386 }, "harness|winogrande|5": { "acc": 0.7213891081294396, "acc_stderr": 0.012599896649493875 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
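A minimal sketch of reading the aggregated metrics above (plain Python; the `results` dict is copied from the latest-results JSON shown in this card, so no network access is assumed):

```python
# Aggregated metrics copied from the latest results shown above.
results = {
    "all": {"em": 0.06103187919463087, "f1": 0.12157193791946294,
            "acc": 0.3622108542921648},
    "harness|drop|3": {"em": 0.06103187919463087, "f1": 0.12157193791946294},
    "harness|gsm8k|5": {"acc": 0.003032600454890068},
    "harness|winogrande|5": {"acc": 0.7213891081294396},
}

# Pull per-task accuracy for every task that reports one,
# skipping the "all" aggregate entry.
task_acc = {task: vals["acc"]
            for task, vals in results.items()
            if task != "all" and "acc" in vals}
print(task_acc["harness|winogrande|5"])  # → 0.7213891081294396
```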
open-llm-leaderboard/details_ajibawa-2023__scarlett-7b
[ "region:us" ]
2023-10-16T13:25:25+00:00
{"pretty_name": "Evaluation run of ajibawa-2023/scarlett-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [ajibawa-2023/scarlett-7b](https://huggingface.co/ajibawa-2023/scarlett-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ajibawa-2023__scarlett-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T20:24:47.914205](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__scarlett-7b/blob/main/results_2023-10-28T20-24-47.914205.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06103187919463087,\n \"em_stderr\": 0.002451565190705489,\n \"f1\": 0.12157193791946294,\n \"f1_stderr\": 0.002704445932722437,\n \"acc\": 0.3622108542921648,\n \"acc_stderr\": 0.007057235105359207\n },\n \"harness|drop|3\": {\n \"em\": 0.06103187919463087,\n \"em_stderr\": 0.002451565190705489,\n \"f1\": 0.12157193791946294,\n \"f1_stderr\": 0.002704445932722437\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.003032600454890068,\n \"acc_stderr\": 0.0015145735612245386\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7213891081294396,\n \"acc_stderr\": 0.012599896649493875\n }\n}\n```", "repo_url": "https://huggingface.co/ajibawa-2023/scarlett-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T14_25_21.080350", "path": ["**/details_harness|drop|3_2023-10-16T14-25-21.080350.parquet"]}, {"split": "2023_10_28T20_24_47.914205", "path": ["**/details_harness|drop|3_2023-10-28T20-24-47.914205.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T20-24-47.914205.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T14_25_21.080350", "path": ["**/details_harness|gsm8k|5_2023-10-16T14-25-21.080350.parquet"]}, {"split": "2023_10_28T20_24_47.914205", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-24-47.914205.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T20-24-47.914205.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T14_25_21.080350", "path": ["**/details_harness|winogrande|5_2023-10-16T14-25-21.080350.parquet"]}, {"split": "2023_10_28T20_24_47.914205", "path": ["**/details_harness|winogrande|5_2023-10-28T20-24-47.914205.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-10-28T20-24-47.914205.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_16T14_25_21.080350", "path": ["results_2023-10-16T14-25-21.080350.parquet"]}, {"split": "2023_10_28T20_24_47.914205", "path": ["results_2023-10-28T20-24-47.914205.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T20-24-47.914205.parquet"]}]}]}
2023-10-28T19:24:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ajibawa-2023/scarlett-7b ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: clementine@URL ### Dataset Summary Dataset automatically created during the evaluation run of model ajibawa-2023/scarlett-7b on the Open LLM Leaderboard. The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-10-28T20:24:47.914205 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions
[ "# Dataset Card for Evaluation run of ajibawa-2023/scarlett-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T20:24:47.914205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ajibawa-2023/scarlett-7b", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL", "### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-10-28T20:24:47.914205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions" ]
[ 6, 19, 31, 167, 67, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ajibawa-2023/scarlett-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model ajibawa-2023/scarlett-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T20:24:47.914205(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions" ]