| column | type | min length | max length |
|---|---|---:|---:|
| sha | string | 40 | 40 |
| text | string | 0 | 13.4M |
| id | string | 2 | 117 |
| tags | list | | |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 31.7M |
| last_modified | string | 25 | 25 |
e3290585c7c08b65826dbf628bb64eb9e3d60e92
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/deberta-v3-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
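A comparable evaluation can be sketched locally with the open-source `evaluate` library; this is a hedged example rather than the exact AutoTrain pipeline (the job above reports `bertscore`, while the snippet below uses the standard `squad_v2` metric):

```python
# Hedged sketch of a local evaluation run; the hosted AutoTrain job may use
# different settings and reports bertscore rather than the squad_v2 metric.
from datasets import load_dataset
from evaluate import evaluator

task_evaluator = evaluator("question-answering")
data = load_dataset("squad_v2", split="validation")
results = task_evaluator.compute(
    model_or_pipeline="deepset/deberta-v3-base-squad2",
    data=data,
    metric="squad_v2",
    squad_v2_format=True,  # squad_v2 contains unanswerable questions
)
print(results)
```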
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916079
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T21:10:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-31T21:14:24+00:00
6403e178c742dcd7c2b572e9e4df8f33577eb62d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/electra-base-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916081
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T21:10:55+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-31T21:14:00+00:00
6ec84a0ec5da70e845deca75ffa6141a28839907
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/minilm-uncased-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916082
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T21:12:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/minilm-uncased-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-31T21:15:10+00:00
ba06dc05a1b91c497f489bfa9793acdfb4ce06ec
# Dataset Card for GLUE ## Table of Contents - [Dataset Card for GLUE](#dataset-card-for-glue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [ax](#ax) - [cola](#cola) - [mnli](#mnli) - [mnli_matched](#mnli_matched) - [mnli_mismatched](#mnli_mismatched) - [mrpc](#mrpc) - [qnli](#qnli) - [qqp](#qqp) - [rte](#rte) - [sst2](#sst2) - [stsb](#stsb) - [wnli](#wnli) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [ax](#ax-1) - [cola](#cola-1) - [mnli](#mnli-1) - [mnli_matched](#mnli_matched-1) - [mnli_mismatched](#mnli_mismatched-1) - [mrpc](#mrpc-1) - [qnli](#qnli-1) - [qqp](#qqp-1) - [rte](#rte-1) - [sst2](#sst2-1) - [stsb](#stsb-1) - [wnli](#wnli-1) - [Data Fields](#data-fields) - [ax](#ax-2) - [cola](#cola-2) - [mnli](#mnli-2) - [mnli_matched](#mnli_matched-2) - [mnli_mismatched](#mnli_mismatched-2) - [mrpc](#mrpc-2) - [qnli](#qnli-2) - [qqp](#qqp-2) - [rte](#rte-2) - [sst2](#sst2-2) - [stsb](#stsb-2) - [wnli](#wnli-2) - [Data Splits](#data-splits) - [ax](#ax-3) - [cola](#cola-3) - [mnli](#mnli-3) - [mnli_matched](#mnli_matched-3) - [mnli_mismatched](#mnli_mismatched-3) - [mrpc](#mrpc-3) - [qnli](#qnli-3) - [qqp](#qqp-3) - [rte](#rte-3) - [sst2](#sst2-3) - [stsb](#stsb-3) - [wnli](#wnli-3) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 955.33 MB - **Size of the generated dataset:** 229.68 MB - **Total amount of disk used:** 1185.01 MB ### Dataset Summary GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems. ### Supported Tasks and Leaderboards The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). 
It comprises the following tasks: #### ax A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset. #### cola The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence. #### mnli The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data. #### mnli_matched The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. #### mnli_mismatched The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information. #### mrpc The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. #### qnli The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. #### qqp The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. #### rte The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency. 
#### sst2 The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels. #### stsb The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. #### wnli The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI). ### Languages The language data in GLUE is in English (BCP-47 `en`). ## Dataset Structure ### Data Instances #### ax - **Size of downloaded dataset files:** 0.21 MB - **Size of the generated dataset:** 0.23 MB - **Total amount of disk used:** 0.44 MB An example of 'test' looks as follows. ``` { "premise": "The cat sat on the mat.", "hypothesis": "The cat did not sit on the mat.", "label": -1, "idx": 0 } ``` #### cola - **Size of downloaded dataset files:** 0.36 MB - **Size of the generated dataset:** 0.58 MB - **Total amount of disk used:** 0.94 MB An example of 'train' looks as follows. ``` { "sentence": "Our friends won't buy this analysis, let alone the next one we propose.", "label": 1, "idx": 0 } ``` #### mnli - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 78.65 MB - **Total amount of disk used:** 376.95 MB An example of 'train' looks as follows. ``` { "premise": "Conceptually cream skimming has two basic dimensions - product and geography.", "hypothesis": "Product and geography are what make cream skimming work.", "label": 1, "idx": 0 } ``` #### mnli_matched - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 3.52 MB - **Total amount of disk used:** 301.82 MB An example of 'test' looks as follows. 
``` { "premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.", "hypothesis": "Hierbas is a name worth looking out for.", "label": -1, "idx": 0 } ``` #### mnli_mismatched - **Size of downloaded dataset files:** 298.29 MB - **Size of the generated dataset:** 3.73 MB - **Total amount of disk used:** 302.02 MB An example of 'test' looks as follows. ``` { "premise": "What have you decided, what are you going to do?", "hypothesis": "So what's your decision?", "label": -1, "idx": 0 } ``` #### mrpc [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Fields The data fields are the same among all splits. #### ax - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: an `int32` feature. #### cola - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1). - `idx`: an `int32` feature. #### mnli - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: an `int32` feature. #### mnli_matched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: an `int32` feature. #### mnli_mismatched - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `idx`: an `int32` feature. 
#### mrpc [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Data Splits #### ax | |test| |---|---:| |ax |1104| #### cola | |train|validation|test| |----|----:|---------:|---:| |cola| 8551| 1043|1063| #### mnli | |train |validation_matched|validation_mismatched|test_matched|test_mismatched| |----|-----:|-----------------:|--------------------:|-----------:|--------------:| |mnli|392702| 9815| 9832| 9796| 9847| #### mnli_matched | |validation|test| |------------|---------:|---:| |mnli_matched| 9815|9796| #### mnli_mismatched | |validation|test| |---------------|---------:|---:| |mnli_mismatched| 9832|9847| #### mrpc [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### qqp [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### rte [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### sst2 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### stsb [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### wnli [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{warstadt2018neural, title={Neural Network Acceptability Judgments}, author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R}, journal={arXiv preprint arXiv:1805.12471}, year={2018} } @inproceedings{wang2019glue, title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding}, author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.}, note={In the Proceedings of ICLR.}, year={2019} } Note that each GLUE dataset has its own citation. Please see the source to see the correct citation for each contained dataset. ``` ### Contributions Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
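As a quick orientation for the configs listed above, each one loads by name with the `datasets` library; a minimal sketch (the printed output is the cola example from this card):

```python
from datasets import load_dataset

# Load any GLUE config by its name from the list above, e.g. "cola".
cola = load_dataset("glue", "cola")
print(cola["train"][0])
# {'sentence': "Our friends won't buy this analysis, let alone the next one we propose.",
#  'label': 1, 'idx': 0}
```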
evaluate/glue-ci
[ "task_categories:text-classification", "task_ids:acceptability-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:sentiment-classification", "task_ids:text-scoring", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-08-31T21:17:54+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification", "natural-language-inference", "semantic-similarity-scoring", "sentiment-classification", "text-classification-other-coreference-nli", "text-classification-other-paraphrase-identification", "text-classification-other-qa-nli", "text-scoring"], "paperswithcode_id": "glue", "pretty_name": "GLUE (General Language Understanding Evaluation benchmark)", "configs": ["ax", "cola", "mnli", "mnli_matched", "mnli_mismatched", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"], "train-eval-index": [{"config": "cola", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "sst2", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence": "text", "label": "target"}}, {"config": "mrpc", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "qqp", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question1": "text1", "question2": "text2", "label": "target"}}, {"config": "stsb", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "mnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation_matched"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_mismatched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "mnli_matched", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"premise": "text1", "hypothesis": "text2", "label": "target"}}, {"config": "qnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "text1", "sentence": "text2", "label": "target"}}, {"config": "rte", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "wnli", "task": "text-classification", "task_id": "natural_language_inference", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]}
2022-09-15T19:12:43+00:00
7146c03d31dcc036af4e2b78631a3ba1bd10b883
EricPeter/comments
[ "license:cc0-1.0", "region:us" ]
2022-08-31T21:47:55+00:00
{"license": "cc0-1.0"}
2022-08-31T21:49:02+00:00
fdf89d9ab61732bcb253768750a35dcf7bba9a9e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: ptnv-s/biobert_squad2_cased-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c9381c-14936084
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T22:13:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "ptnv-s/biobert_squad2_cased-finetuned-squad", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-31T22:16:12+00:00
f73936e33d1c4ee021cb17b21e16ffff0ca95b80
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: gerardozq/biobert_v1.1_pubmed-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-c9381c-14936085
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T22:46:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "gerardozq/biobert_v1.1_pubmed-finetuned-squad", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-31T22:49:22+00:00
d5bf79983aff9a4a44953c5edf97a05393c8ab58
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
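A comparable summarization evaluation can be sketched locally with the `evaluate` library; this is a hedged example (ROUGE is assumed here as the usual summarization metric, and a small test subset keeps the run cheap):

```python
# Hedged sketch of a local evaluation; the hosted AutoTrain job's exact
# settings are unknown.
from datasets import load_dataset
from evaluate import evaluator

task_evaluator = evaluator("summarization")
data = load_dataset("cnn_dailymail", "3.0.0", split="test[:100]")
results = task_evaluator.compute(
    model_or_pipeline="facebook/bart-large-cnn",
    data=data,
    metric="rouge",
    input_column="article",    # cnn_dailymail column names
    label_column="highlights",
)
print(results)
```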
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-d7ce16-14946086
[ "autotrain", "evaluation", "region:us" ]
2022-08-31T23:03:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["mse"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-01T00:06:48+00:00
8591fdc2d9f94cfcd336feedb3002b0fdbc1f3d8
Chr0my/Epidemic_sounds
[ "license:mit", "region:us" ]
2022-09-01T00:01:07+00:00
{"license": "mit"}
2022-09-01T00:19:57+00:00
19c35918209a49548c54478695bbe6b8f0dc758e
This is the dataset used to post-train the [BERTweet](https://huggingface.co/cardiffnlp/twitter-roberta-base) language model on a Masked Language Modeling (MLM) task, resulting in the [CryptoBERT](https://huggingface.co/ElKulako/cryptobert) language model. The dataset contains 3.207 million unique posts from the language domain of cryptocurrency-related social media text: 1.865 million StockTwits posts, 496 thousand tweets, 172 thousand Reddit comments, and 664 thousand Telegram messages.
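For orientation, the described MLM post-training step can be sketched with Hugging Face Transformers; this is a minimal, hedged example that assumes the corpus exposes a single `text` column with a `train` split, with illustrative hyperparameters rather than CryptoBERT's actual settings:

```python
# Minimal MLM post-training sketch; column name, split, and hyperparameters
# are assumptions, not the settings used to train CryptoBERT.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "cardiffnlp/twitter-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

corpus = load_dataset("ElKulako/cryptobert-posttrain", split="train")
tokenized = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True),
                       batched=True, remove_columns=corpus.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cryptobert-mlm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```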
ElKulako/cryptobert-posttrain
[ "license:afl-3.0", "region:us" ]
2022-09-01T03:10:42+00:00
{"license": "afl-3.0"}
2022-09-01T03:22:42+00:00
6c66817025509e853c1c7f3ea268f9fed96e240c
Exterus/Language
[ "license:other", "region:us" ]
2022-09-01T11:33:41+00:00
{"license": "other"}
2022-09-01T11:33:41+00:00
ccab437292d2159fa22f4cd9a97f69ed4db79e2c
mteb/results
[ "benchmark:mteb", "region:us" ]
2022-09-01T13:15:23+00:00
{"benchmark": "mteb", "type": "evaluation", "submission_name": "MTEB"}
2024-02-14T12:31:34+00:00
8982dbea4a595589b7ebe46b3d7eec6707eeea16
# Dataset Card for environmental_claims ## Dataset Description - **Homepage:** [climatebert.ai](https://climatebert.ai) - **Repository:** - **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507) - **Leaderboard:** - **Point of Contact:** [Dominik Stammbach](mailto:[email protected]) ### Dataset Summary We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies. ### Supported Tasks and Leaderboards The dataset supports a binary classification task of whether a given sentence is an environmental claim or not. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances ``` { "text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.", "label": 0 } ``` ### Data Fields - text: a sentence extracted from corporate annual reports, sustainability reports, and earnings call transcripts - label: the label (0 -> no environmental claim, 1 -> environmental claim) ### Data Splits The dataset is split into: - train: 2,400 - validation: 300 - test: 300 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earnings call transcripts. For more information regarding our sample selection, please refer to Appendix B of our paper (see [Citation Information](#citation-information)). #### Who are the source language producers? Mainly large listed companies. ### Annotations #### Annotation process For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper (see [Citation Information](#citation-information)). #### Who are the annotators? The authors and students at the University of Zurich with majors in finance and sustainable finance. ### Personal and Sensitive Information Since our text sources contain public information, no personal or sensitive information should be included. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - Dominik Stammbach - Nicolas Webersinke - Julia Anna Bingler - Mathias Kraus - Markus Leippold ### Licensing Information This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you are interested in commercial use of the dataset, please contact [[email protected]](mailto:[email protected]). ### Citation Information ```bibtex @misc{stammbach2022environmentalclaims, title = {A Dataset for Detecting Real-World Environmental Claims}, author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus}, year = {2022}, doi = {10.48550/ARXIV.2209.00507}, url = {https://arxiv.org/abs/2209.00507}, publisher = {arXiv}, } ``` ### Contributions Thanks to [@webersni](https://github.com/webersni) for adding this dataset.
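A minimal loading sketch with the `datasets` library, matching the fields and splits described above:

```python
from datasets import load_dataset

ds = load_dataset("climatebert/environmental_claims")
print(ds)                     # train / validation / test splits
example = ds["train"][0]
print(example["text"], "->", example["label"])  # 0 = no claim, 1 = claim
```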
climatebert/environmental_claims
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2209.00507", "region:us" ]
2022-09-01T13:19:17+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "EnvironmentalClaims", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no", "1": "yes"}}}}], "splits": [{"name": "train", "num_bytes": 346686, "num_examples": 2117}, {"name": "validation", "num_bytes": 43018, "num_examples": 265}, {"name": "test", "num_bytes": 42810, "num_examples": 265}], "download_size": 272422, "dataset_size": 432514}}
2023-05-23T07:53:10+00:00
4bce21b1f9211f24ff5ec321db8ea10894e3f425
# Dataset Card for "cardiffnlp/tweet_topic_multi" ## Dataset Description - **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824) - **Dataset:** Tweet Topic Dataset - **Domain:** Twitter - **Number of Classes:** 19 ### Dataset Summary This is the official repository of TweetTopic (["Twitter Topic Classification, COLING main conference 2022"](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 19 labels. Each instance of TweetTopic comes with a timestamp ranging from September 2019 to August 2021. See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for the single-label version of TweetTopic. The tweet collection used in TweetTopic is the same as the one used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7). The dataset is also integrated in [TweetNLP](https://tweetnlp.org/). ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. For verified usernames, we replace the display name (or account name) with the symbols `{@}`. For example, a tweet ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek ``` is transformed into the following text. ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}} ``` A simple function to format a tweet follows below.
```python
import re
from urlextract import URLExtract

extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # format twitter account
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```
### Data Splits | split | number of texts | description | |:------------------------|-----:|------:| | test_2020 | 573 | test dataset from September 2019 to August 2020 | | test_2021 | 1679 | test dataset from September 2020 to August 2021 | | train_2020 | 4585 | training dataset from September 2019 to August 2020 | | train_2021 | 1505 | training dataset from September 2020 to August 2021 | | train_all | 6090 | combined training dataset of `train_2020` and `train_2021` | | validation_2020 | 573 | validation dataset from September 2019 to August 2020 | | validation_2021 | 188 | validation dataset from September 2020 to August 2021 | | train_random | 4564 | randomly sampled training dataset with the same size as `train_2020` from `train_all` | | validation_random | 573 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` | | test_coling2022_random | 5536 | random split used in the COLING 2022 paper | | train_coling2022_random | 5731 | random split used in the COLING 2022 paper | | test_coling2022 | 5536 | temporal split used in the COLING 2022 paper | | train_coling2022 | 5731 | temporal split used in the COLING 2022 paper | For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on 
`test_2021`. In general, models would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`. **IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set). ### Models | model | training data | F1 | F1 (macro) | Accuracy | |:----|:----|---:|---:|---:| | [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 | | [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 | | [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 | | [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 | The model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). ## Dataset Structure ### Data Instances An example of `train` looks as follows. ```python { "date": "2021-03-07", "text": "The latest The Movie theater Daily! 
{{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000", "id": "1368464923370676231", "label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "label_name": ["film_tv_&_video"] } ``` ### Labels | <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> | |-----------------------------|---------------------|----------------------------|--------------------------| | 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports | | 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure | | 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life | | 4: family | 9: gaming | 14: relationships | | Annotation instructions can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing). The label2id dictionary can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/dataset/label.multi.json). ### Citation Information ``` @inproceedings{dimosthenis-etal-2022-twitter, title = "{T}witter {T}opic {C}lassification", author = "Antypas, Dimosthenis and Ushio, Asahi and Camacho-Collados, Jose and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics" } ```
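A minimal sketch of loading the splits recommended above for the temporal-shift setting:

```python
from datasets import load_dataset

train = load_dataset("cardiffnlp/tweet_topic_multi", split="train_2020")
valid = load_dataset("cardiffnlp/tweet_topic_multi", split="validation_2020")
test = load_dataset("cardiffnlp/tweet_topic_multi", split="test_2021")
print(train[0]["label_name"])  # human-readable names for the active labels
```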
cardiffnlp/tweet_topic_multi
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2209.09824", "region:us" ]
2022-09-01T13:30:46+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "TweetTopicSingle"}
2024-01-17T14:54:48+00:00
2e11493c1b92c66b3d718b39d13d21c0bcbab1ba
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: bhadresh-savani/roberta-base-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@gmoney](https://huggingface.co/gmoney) for evaluating this model.
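A comparable multi-class evaluation can be sketched locally with the `evaluate` library; this is a hedged example (accuracy stands in for the job's metrics, and the `label_mapping` assumes `LABEL_<id>` output names, which may not match this model's actual label names):

```python
# Hedged sketch of a local evaluation; the label_mapping is an assumption
# about the model's output label names.
from datasets import load_dataset
from evaluate import evaluator

task_evaluator = evaluator("text-classification")
data = load_dataset("emotion", split="test")
results = task_evaluator.compute(
    model_or_pipeline="bhadresh-savani/roberta-base-emotion",
    data=data,
    metric="accuracy",
    label_mapping={f"LABEL_{i}": i for i in range(6)},  # emotion has 6 classes
)
print(results)
```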
autoevaluate/autoeval-staging-eval-emotion-default-139135-14996090
[ "autotrain", "evaluation", "region:us" ]
2022-09-01T14:39:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/roberta-base-emotion", "metrics": ["roc_auc", "mae"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-01T14:39:48+00:00
aff1661b05d3101c728c5383a9c84111d2e1349f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: ericntay/bert-finetuned-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@gmoney](https://huggingface.co/gmoney) for evaluating this model.
autoevaluate/autoeval-staging-eval-emotion-default-139135-14996091
[ "autotrain", "evaluation", "region:us" ]
2022-09-01T14:39:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "ericntay/bert-finetuned-emotion", "metrics": ["roc_auc", "mae"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-01T14:39:53+00:00
532014b9d4a1dd5c658db790758698c0810d9793
BAJIRAO/spam_data
[ "region:us" ]
2022-09-01T18:44:59+00:00
{}
2022-09-01T19:08:50+00:00
1e636ac88c46ec15dafa23d63d5d28ce8f03df9a
Read this [BLOG](https://neuralmagic.com/blog/classifying-finance-tweets-in-real-time-with-sparse-transformers/) to see how I fine-tuned a sparse transformer on this dataset. ### Dataset Description The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment. The dataset holds 11,932 documents annotated with 3 labels: ```python sentiments = { "LABEL_0": "Bearish", "LABEL_1": "Bullish", "LABEL_2": "Neutral" } ``` The data was collected using the Twitter API. The current dataset supports the multi-class classification task. ### Task: Sentiment Analysis # Data Splits There are 2 splits: train and validation. Below are the statistics: | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 9,938 | | Validation | 2,486 | # Licensing Information The Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License.
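A minimal loading sketch; the integer-to-name mapping below mirrors the `LABEL_<id>` table above, and the `text`/`label` column names are assumptions about the stored schema:

```python
from datasets import load_dataset

# Integer ids assumed to follow the LABEL_<id> mapping shown above.
id2sentiment = {0: "Bearish", 1: "Bullish", 2: "Neutral"}

ds = load_dataset("zeroshot/twitter-financial-news-sentiment")
example = ds["train"][0]
print(example["text"], "->", id2sentiment[example["label"]])
```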
zeroshot/twitter-financial-news-sentiment
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "region:us" ]
2022-09-01T20:21:56+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "twitter financial news", "tags": ["twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "markets"]}
2022-12-12T14:32:59+00:00
bacd60959e6e00287ef74c0ebf49fba20dce61b9
Lubub/locutorxxinews
[ "license:apache-2.0", "region:us" ]
2022-09-01T22:56:34+00:00
{"license": "apache-2.0"}
2022-09-01T22:56:34+00:00
dc06182a52cd5bbb6d30a5e2e62a1406dec583dc
Lubub/testexxi
[ "license:apache-2.0", "region:us" ]
2022-09-01T23:05:12+00:00
{"license": "apache-2.0"}
2022-09-01T23:05:12+00:00
87b7a0d1c402dbb481db649569c556d9aa27ac05
# Dataset Card for "cardiffnlp/tweet_topic_single" ## Dataset Description - **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824) - **Dataset:** Tweet Topic Dataset - **Domain:** Twitter - **Number of Classes:** 6 ### Dataset Summary This is the official repository of TweetTopic (["Twitter Topic Classification, COLING main conference 2022"](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 6 labels. Each instance of TweetTopic comes with a timestamp ranging from September 2019 to August 2021. See [cardiffnlp/tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) for the multi-label version of TweetTopic. The tweet collection used in TweetTopic is the same as the one used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7). The dataset is also integrated in [TweetNLP](https://tweetnlp.org/). ### Preprocessing We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`. For verified usernames, we replace the display name (or account name) with the symbols `{@}`. For example, a tweet ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek ``` is transformed into the following text. ``` Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}} ``` A simple function to format a tweet follows below.
```python
import re
from urlextract import URLExtract

extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # format twitter account
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```
### Data Splits | split | number of texts | description | |:------------------------|-----:|------:| | test_2020 | 376 | test dataset from September 2019 to August 2020 | | test_2021 | 1693 | test dataset from September 2020 to August 2021 | | train_2020 | 2858 | training dataset from September 2019 to August 2020 | | train_2021 | 1516 | training dataset from September 2020 to August 2021 | | train_all | 4374 | combined training dataset of `train_2020` and `train_2021` | | validation_2020 | 352 | validation dataset from September 2019 to August 2020 | | validation_2021 | 189 | validation dataset from September 2020 to August 2021 | | train_random | 2830 | randomly sampled training dataset with the same size as `train_2020` from `train_all` | | validation_random | 354 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` | | test_coling2022_random | 3399 | random split used in the COLING 2022 paper | | train_coling2022_random | 3598 | random split used in the COLING 2022 paper | | test_coling2022 | 3399 | temporal split used in the COLING 2022 paper | | train_coling2022 | 3598 | temporal split used in the COLING 2022 paper | For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on 
`test_2021`. In general, models would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`. **IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set). ### Models | model | training data | F1 | F1 (macro) | Accuracy | |:----|:----|---:|---:|---:| | [cardiffnlp/roberta-large-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-all) | all (2020 + 2021) | 0.896043 | 0.800061 | 0.896043 | | [cardiffnlp/roberta-base-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-all) | all (2020 + 2021) | 0.887773 | 0.79793 | 0.887773 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all) | all (2020 + 2021) | 0.892499 | 0.774494 | 0.892499 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all) | all (2020 + 2021) | 0.890136 | 0.776025 | 0.890136 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all) | all (2020 + 2021) | 0.894861 | 0.800952 | 0.894861 | | [cardiffnlp/roberta-large-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-2020) | 2020 only | 0.878913 | 0.70565 | 0.878913 | | [cardiffnlp/roberta-base-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-2020) | 2020 only | 0.868281 | 0.729667 | 0.868281 | | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020) | 2020 only | 0.882457 | 0.740187 | 0.882457 | | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020) | 2020 only | 0.87596 | 0.746275 | 0.87596 | | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020) | 2020 only | 0.877732 | 0.746119 | 0.877732 | The model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). ## Dataset Structure ### Data Instances An example of `train` looks as follows. ```python { "text": "Game day for {{USERNAME}} U18\u2019s against {{USERNAME}} U18\u2019s. Even though it\u2019s a \u2018home\u2019 game for the people that have settled in Mid Wales it\u2019s still a 4 hour round trip for us up to Colwyn Bay. Still enjoy it though!", "date": "2019-09-08", "label": 4, "id": "1170606779568463874", "label_name": "sports_&_gaming" } ``` ### Label ID The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweet_topic_single/raw/main/dataset/label.single.json). 
```python { "arts_&_culture": 0, "business_&_entrepreneurs": 1, "pop_culture": 2, "daily_life": 3, "sports_&_gaming": 4, "science_&_technology": 5 } ``` ### Citation Information ``` @inproceedings{dimosthenis-etal-2022-twitter, title = "{T}witter {T}opic {C}lassification", author = "Antypas, Dimosthenis and Ushio, Asahi and Camacho-Collados, Jose and Neves, Leonardo and Silva, Vitor and Barbieri, Francesco", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics" } ```
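A minimal sketch that loads one of the splits above and resolves an integer label through the `label2id` mapping just shown:

```python
from datasets import load_dataset

label2id = {"arts_&_culture": 0, "business_&_entrepreneurs": 1, "pop_culture": 2,
            "daily_life": 3, "sports_&_gaming": 4, "science_&_technology": 5}
id2label = {v: k for k, v in label2id.items()}

train = load_dataset("cardiffnlp/tweet_topic_single", split="train_2020")
print(train[0]["text"], "->", id2label[train[0]["label"]])
```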
cardiffnlp/tweet_topic_single
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:1k<10K", "language:en", "license:other", "arxiv:2209.09824", "region:us" ]
2022-09-01T23:20:17+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "TweetTopicSingle"}
2022-11-27T11:25:34+00:00
fd58a44fc0160dea934912d28c113b39279b92af
xianbao/test
[ "license:apache-2.0", "region:us" ]
2022-09-01T23:50:30+00:00
{"license": "apache-2.0"}
2022-09-01T23:50:30+00:00
3a80376302783b83edcba43d8ef53f49eadb0298
Chr0my/Epidemic_music
[ "license:mit", "region:us" ]
2022-09-02T01:21:42+00:00
{"license": "mit"}
2022-09-02T01:25:43+00:00
69d51c85d30f6f0202c140ecdd40bd010027e59f
tobiaslee/FiCLS
[ "license:afl-3.0", "region:us" ]
2022-09-02T02:14:32+00:00
{"license": "afl-3.0"}
2022-09-02T02:14:32+00:00
73d805de8c0299677d1037085f4272949da330ef
nid989/EssayFroum-Dataset
[ "license:apache-2.0", "region:us" ]
2022-09-02T03:09:43+00:00
{"license": "apache-2.0"}
2022-09-02T03:45:37+00:00
8b820b74765bc3a114dd3d1cbb344ed857bef73b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: sshleifer/distilbart-xsum-9-6 * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Rohil](https://huggingface.co/Rohil) for evaluating this model.
autoevaluate/autoeval-staging-eval-xsum-default-21f5cd-15036097
[ "autotrain", "evaluation", "region:us" ]
2022-09-02T08:24:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-9-6", "metrics": ["accuracy"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-02T08:46:38+00:00
7779c1f5ce465390fae18cef176c52cd371e8618
# Unnormalized AMI

```python
from datasets import load_dataset

ami = load_dataset("speech-seq2seq/ami", "ihm")
```

## TODO(PVP)
- explain exactly what normalization was accepted and what wasn't
speech-seq2seq/ami
[ "region:us" ]
2022-09-02T09:47:53+00:00
{}
2022-09-06T22:03:11+00:00
ccc566cd8230464f03b0d045958aef0d4b98398d
# Dataset Card for AIDS ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data)** - **Paper:** (see citation) - **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-aids) ### Dataset Summary The `AIDS` dataset contains compounds checked for evidence of anti-HIV activity. ### Supported Tasks and Leaderboards `AIDS` should be used for molecular classification, a binary classification task. The score used is accuracy with cross validation. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/AIDS")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | medium | | #graphs | 1999 | | average #nodes | 15.5875 | | average #edges | 32.39 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the target label (here a single binary label, equal to zero or one) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @InProceedings{10.1007/978-3-540-89689-0_33, author="Riesen, Kaspar and Bunke, Horst", editor="da Vitoria Lobo, Niels and Kasparis, Takis and Roli, Fabio and Kwok, James T. and Georgiopoulos, Michael and Anagnostopoulos, Georgios C. and Loog, Marco", title="IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning", booktitle="Structural, Syntactic, and Statistical Pattern Recognition", year="2008", publisher="Springer Berlin Heidelberg", address="Berlin, Heidelberg", pages="287--297", abstract="In recent years the use of graph based representation has gained popularity in pattern recognition and machine learning. 
As a matter of fact, object representation by means of graphs has a number of advantages over feature vectors. Therefore, various algorithms for graph based machine learning have been proposed in the literature. However, in contrast with the emerging interest in graph based representation, a lack of standardized graph data sets for benchmarking can be observed. Common practice is that researchers use their own data sets, and this behavior cumbers the objective evaluation of the proposed methods. In order to make the different approaches in graph based machine learning better comparable, the present paper aims at introducing a repository of graph data sets and corresponding benchmarks, covering a wide spectrum of different applications.", isbn="978-3-540-89689-0" } ```
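For readers who want PyG's canonical attribute names (`x`, `edge_index`, `edge_attr`, `y`), here is a hedged sketch of a more explicit conversion. The field names come from the Data Fields list above; the tensor dtypes and the batch size are assumptions, not part of the original card.

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/AIDS")

def to_pyg(graph):
    # Map the card's field names onto PyG's canonical attributes.
    return Data(
        x=torch.tensor(graph["node_feat"], dtype=torch.float),
        edge_index=torch.tensor(graph["edge_index"], dtype=torch.long),
        edge_attr=torch.tensor(graph["edge_attr"], dtype=torch.float),
        y=torch.tensor(graph["y"]),
        num_nodes=graph["num_nodes"],
    )

loader = DataLoader([to_pyg(g) for g in dataset_hf["train"]], batch_size=32)
for batch in loader:
    print(batch)  # mini-batched graphs with x, edge_index, edge_attr, y
    break
```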
graphs-datasets/AIDS
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T09:51:25+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:38:52+00:00
d1caecd9c7c2f81ee392349d0f0fdf5512dd1b26
# Dataset Card for alchemy ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](https://alchemy.tencent.com/)** - **Paper:** (see citation) - **Leaderboard:** [Leaderboard](https://alchemy.tencent.com/) ### Dataset Summary The `alchemy` dataset is a molecular dataset, called Alchemy, which lists 12 quantum mechanical properties of 130,000+ organic molecules comprising up to 12 heavy atoms (C, N, O, S, F and Cl), sampled from the GDBMedChem database. ### Supported Tasks and Leaderboards `alchemy` should be used for organic quantum molecular property prediction, a regression task on 12 properties. The score used is MAE. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/alchemy")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 202578 | | average #nodes | 10.101387606810183 | | average #edges | 20.877326870011206 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: 1 x #labels): contains the regression targets to predict (here the 12 quantum mechanical properties) - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license mit. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{DBLP:journals/corr/abs-1906-09427, author = {Guangyong Chen and Pengfei Chen and Chang{-}Yu Hsieh and Chee{-}Kong Lee and Benben Liao and Renjie Liao and Weiwen Liu and Jiezhong Qiu and Qiming Sun and Jie Tang and Richard S. 
Zemel and Shengyu Zhang}, title = {Alchemy: {A} Quantum Chemistry Dataset for Benchmarking {AI} Models}, journal = {CoRR}, volume = {abs/1906.09427}, year = {2019}, url = {http://arxiv.org/abs/1906.09427}, eprinttype = {arXiv}, eprint = {1906.09427}, timestamp = {Mon, 11 Nov 2019 12:55:11 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1906-09427.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
graphs-datasets/alchemy
[ "task_categories:graph-ml", "arxiv:2007.08663", "arxiv:1906.09427", "region:us" ]
2022-09-02T10:08:39+00:00
{"task_categories": ["graph-ml"], "licence": "mit"}
2023-02-07T16:38:45+00:00
4e808c91a6645b849e607e953196ea97f08d111e
# Dataset Card for aspirin ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `aspirin` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `aspirin` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-aspirin")
# For the full set; unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["full"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 111762 | | average #nodes | 21.0 | | average #edges | 303.0447106824262 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-aspirin
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T10:24:39+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:38:29+00:00
2c4c5d74bb0492becb3a3aa6a7f4f0a5493c1220
# Dataset Card for benzene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `benzene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `benzene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-benzene")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 527983 | | average #nodes | 12.0 | | average #edges | 129.8848866632322 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-benzene
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T10:28:47+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:38:21+00:00
9435372f87fea2f32c41e31237400884a38c7830
# Dataset Card for ethanol ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `ethanol` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `ethanol` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-ethanol")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 455092 | | average #nodes | 9.0 | | average #edges | 72.0 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information Please cite both papers when using these datasets in publications. ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-ethanol
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T10:35:08+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:35:52+00:00
8825653ea5739fd0e81f07ac8b5e7eb943f3a2b2
# Dataset Card for malonaldehyde ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `malonaldehyde` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `malonaldehyde` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-malonaldehyde")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 893237 | | average #nodes | 9.0 | | average #edges | 71.99990148202383 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-malonaldehyde
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T10:39:54+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:37:48+00:00
bd27d0058bea2ad52470d9072a3b5da6b97c1ac3
# Dataset Card for VaccinChatNL ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) <!-- - [Curation Rationale](#curation-rationale) --> <!-- - [Source Data](#source-data) --> - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) <!-- - [Social Impact of Dataset](#social-impact-of-dataset) --> - [Discussion of Biases](#discussion-of-biases) <!-- - [Other Known Limitations](#other-known-limitations) --> - [Additional Information](#additional-information) <!-- - [Dataset Curators](#dataset-curators) --> <!-- - [Licensing Information](#licensing-information) --> - [Citation Information](#citation-information) <!-- - [Contributions](#contributions) --> ## Dataset Description <!-- - **Homepage:** - **Repository:** - **Paper:** [To be added] - **Leaderboard:** --> - **Point of Contact:** [Jeska Buhmann](mailto:[email protected]) ### Dataset Summary VaccinChatNL is a Flemish Dutch FAQ dataset on the topic of COVID-19 vaccinations in Flanders. It consists of 12,883 user questions divided over 181 answer labels, thus providing large groups of semantically equivalent paraphrases (a many-to-one mapping of user questions to answer labels). VaccinChatNL is the first Dutch many-to-one FAQ dataset of this size. ### Supported Tasks and Leaderboards - 'text-classification': the dataset can be used to train a classification model for Dutch frequently asked questions on the topic of COVID-19 vaccination in Flanders. ### Languages Dutch (Flemish): the BCP-47 code for Dutch as generally spoken in Flanders (Belgium) is nl-BE. ## Dataset Structure ### Data Instances For each instance, there is a string for the user question and a string for the label of the annotated answer. See the [CLiPS / VaccinChatNL dataset viewer](https://huggingface.co/datasets/clips/VaccinChatNL/viewer/clips--VaccinChatNL/train). ``` {"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"} ``` ### Data Fields - `sentence1`: a string containing the user question - `label`: a string containing the name of the intent (the answer class) ### Data Splits The VaccinChatNL dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the dataset. | Dataset Split | Number of Labeled User Questions in Split | | ------------- | ------------------------------------------ | | Train | 10,542 | | Validation | 1,171 | | Test | 1,170 | ## Dataset Creation <!-- ### Curation Rationale [More Information Needed] --> <!-- ### Source Data [Perhaps a link to vaccinchat.be and some of the website that were used for information] --> <!-- #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] --> ### Annotations #### Annotation process Annotation was an iterative semi-automatic process. Starting from a very limited dataset with approximately 50 question-answer pairs (_sentence1-label_ pairs), a text classification model was trained and implemented in a publicly available chatbot. 
When the chatbot was used, the predicted labels for the new questions were checked and corrected if necessary. In addition, new answers were added to the dataset. After each round of corrections, the model was retrained on the updated dataset. This iterative approach led to the final dataset containing 12,883 user questions divided over 181 answer labels. #### Who are the annotators? The VaccinChatNL data were annotated by members and students of [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/). All annotators have a background in Computational Linguistics. ### Personal and Sensitive Information The data are anonymized in the sense that a user question can never be traced back to a specific individual. ## Considerations for Using the Data <!-- ### Social Impact of Dataset [More Information Needed] --> ### Discussion of Biases This dataset contains real user questions, including a rather large section (7%) of out-of-domain questions or remarks (_label: nlu_fallback_). This class of user questions consists of incomprehensible questions, but also jokes and insulting remarks. <!-- ### Other Known Limitations [Perhaps some information of % of exact overlap between train and test set] --> ## Additional Information <!-- ### Dataset Curators [More Information Needed] --> <!-- ### Licensing Information [More Information Needed] --> ### Citation Information ``` @inproceedings{buhmann-etal-2022-domain, title = "Domain- and Task-Adaptation for {V}accin{C}hat{NL}, a {D}utch {COVID}-19 {FAQ} Answering Corpus and Classification Model", author = "Buhmann, Jeska and De Bruyn, Maxime and Lotfi, Ehsan and Daelemans, Walter", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.312", pages = "3539--3549" } ``` <!-- ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
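Illustrative only (not part of the original card): a minimal sketch of loading the splits described above with the `datasets` library. The repo id is taken from this card; whether the validation split is exposed as `valid` or `validation` is an assumption to verify.

```python
from datasets import load_dataset

ds = load_dataset("clips/VaccinChatNL")
print(ds)  # inspect the actual split names and sizes

example = ds["train"][0]
print(example["sentence1"], "->", example["label"])
# e.g. 'Waar kan ik de bijsluiters van de vaccins vinden?' -> 'faq_ask_bijsluiter'
```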
clips/VaccinChatNL
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:nl", "license:cc-by-4.0", "covid-19", "FAQ", "question-answer pairs", "region:us" ]
2022-09-02T10:52:00+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["nl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "VaccinChatNL", "tags": ["covid-19", "FAQ", "question-answer pairs"]}
2023-03-21T15:22:36+00:00
797ddc673b956eeaa235a6a372e2a29f105e20ba
# Dataset Card for naphthalene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `naphthalene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `naphthalene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-naphthalene")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 226255 | | average #nodes | 18.0 | | average #edges | 254.73246234354005 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-naphthalene
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T10:54:00+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:38:13+00:00
6311b15ea2069f1726abc865e486c3f7e7977f39
# Dataset Card for salicylic_acid ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `salicylic_acid` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `salicylic_acid` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-salicylic_acid")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 220231 | | average #nodes | 16.0 | | average #edges | 208.2681717461586 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-salicylic_acid
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T11:07:48+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:37:57+00:00
02aabb462c01b362f4deee43ff294cf171bb7daf
# Dataset Card for toluene ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `toluene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `toluene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-toluene")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 342790 | | average #nodes | 15.0 | | average #edges | 192.30698588936116 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-toluene
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T11:12:43+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:38:05+00:00
d4da9a780efd59e60bc2887bb69e2953cfb9b4db
# Dataset Card for uracil ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [External Use](#external-use) - [PyGeometric](#pygeometric) - [Dataset Structure](#dataset-structure) - [Data Properties](#data-properties) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **[Homepage](http://www.sgdml.org/#datasets)** - **Paper:** (see citation) ### Dataset Summary The `uracil` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively. ### Supported Tasks and Leaderboards `uracil` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction. ## External Use ### PyGeometric To load in PyGeometric, do the following:

```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-uracil")
# For the train set (replace by valid or test as needed);
# unpack each graph dict into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure ### Data Properties | property | value | |---|---| | scale | big | | #graphs | 133769 | | average #nodes | 12.0 | | average #edges | 128.88676085818943 | ### Data Fields Each row of a given file is a graph, with: - `node_feat` (list: #nodes x #node-features): nodes - `edge_index` (list: 2 x #edges): pairs of nodes constituting edges - `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features - `y` (list: #labels): contains the label(s) to predict - `num_nodes` (int): number of nodes of the graph ### Data Splits This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset. ## Additional Information ### Licensing Information The dataset has been released under license unknown. ### Citation Information ``` @inproceedings{Morris+2020, title={TUDataset: A collection of benchmark datasets for learning with graphs}, author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann}, booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)}, archivePrefix={arXiv}, eprint={2007.08663}, url={www.graphlearning.io}, year={2020} } ``` ``` @article{Chmiela_2017, doi = {10.1126/sciadv.1603015}, url = {https://doi.org/10.1126%2Fsciadv.1603015}, year = 2017, month = {may}, publisher = {American Association for the Advancement of Science ({AAAS})}, volume = {3}, number = {5}, author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller}, title = {Machine learning of accurate energy-conserving molecular force fields}, journal = {Science Advances} } ```
graphs-datasets/MD17-uracil
[ "task_categories:graph-ml", "arxiv:2007.08663", "region:us" ]
2022-09-02T11:14:39+00:00
{"task_categories": ["graph-ml"], "licence": "unknown"}
2023-02-07T16:37:39+00:00
b838e714070f32045d057422f620a88bd9689c43
This repo contains converted ECMWF ERA5 reanalysis files for both hourly atmospheric and land variables from Jan 2014 to October 2022. The data has been converted from the downloaded NetCDF files into Zarr using Xarray. Each file is 1 day of reanalysis, and so has 24 timesteps at a 0.25 degree grid resolution. All variables in the reanalysis are included here.
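As an illustration only, a sketch of opening one converted day with Xarray. The store name here is hypothetical; check the repository file listing for the actual Zarr store names.

```python
import xarray as xr

# Hypothetical store name -- the repo holds one Zarr store per day of reanalysis.
ds = xr.open_zarr("2020-01-01.zarr")

print(ds.dims)             # expect 24 hourly timesteps on a 0.25-degree lat/lon grid
print(list(ds.data_vars))  # all hourly atmospheric and land variables
```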
openclimatefix/era5-reanalysis
[ "license:mit", "region:us" ]
2022-09-02T11:37:58+00:00
{"license": "mit"}
2022-12-01T15:18:54+00:00
51204a59442e2b988dd4939ec1c89056f8c949b4
patrickfrank1/chess-pgn-games
[ "license:cc0-1.0", "region:us" ]
2022-09-02T11:51:34+00:00
{"license": "cc0-1.0"}
2022-09-02T13:07:22+00:00
029592ccdb7eae9bd59cb40f0c0b2c665148b2b2
# transformers metrics This dataset contains metrics about the huggingface/transformers package. Number of repositories in the dataset: 27067 Number of packages in the dataset: 823 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/transformers/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![transformers-dependent package star count](./transformers-dependents/resolve/main/transformers-dependent_package_star_count.png) | ![transformers-dependent repository star count](./transformers-dependents/resolve/main/transformers-dependent_repository_star_count.png) There are 65 packages that have more than 1000 stars. There are 140 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [hankcs/HanLP](https://github.com/hankcs/HanLP): 26958 [fastai/fastai](https://github.com/fastai/fastai): 22774 [slundberg/shap](https://github.com/slundberg/shap): 17482 [fastai/fastbook](https://github.com/fastai/fastbook): 16052 [jina-ai/jina](https://github.com/jina-ai/jina): 16052 [huggingface/datasets](https://github.com/huggingface/datasets): 14101 [microsoft/recommenders](https://github.com/microsoft/recommenders): 14017 [borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12872 [flairNLP/flair](https://github.com/flairNLP/flair): 12033 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70487 [hankcs/HanLP](https://github.com/hankcs/HanLP): 26959 [ageron/handson-ml2](https://github.com/ageron/handson-ml2): 22886 [ray-project/ray](https://github.com/ray-project/ray): 22047 [jina-ai/jina](https://github.com/jina-ai/jina): 16052 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14844 [microsoft/recommenders](https://github.com/microsoft/recommenders): 14017 [deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 12617 [flairNLP/flair](https://github.com/flairNLP/flair): 12034 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![transformers-dependent package forks count](./transformers-dependents/resolve/main/transformers-dependent_package_forks_count.png) | ![transformers-dependent repository forks count](./transformers-dependents/resolve/main/transformers-dependent_repository_forks_count.png) There are 55 packages that have more than 200 forks. There are 128 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [hankcs/HanLP](https://github.com/hankcs/HanLP): 7388 [fastai/fastai](https://github.com/fastai/fastai): 7297 [fastai/fastbook](https://github.com/fastai/fastbook): 6033 [slundberg/shap](https://github.com/slundberg/shap): 2646 [microsoft/recommenders](https://github.com/microsoft/recommenders): 2473 [allenai/allennlp](https://github.com/allenai/allennlp): 2218 [jina-ai/clip-as-service](https://github.com/jina-ai/clip-as-service): 1972 [jina-ai/jina](https://github.com/jina-ai/jina): 1967 [flairNLP/flair](https://github.com/flairNLP/flair): 1934 [huggingface/datasets](https://github.com/huggingface/datasets): 1841 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16159 [ageron/handson-ml2](https://github.com/ageron/handson-ml2): 11053 [hankcs/HanLP](https://github.com/hankcs/HanLP): 7389 [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493 [deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 4933 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4106 [ray-project/ray](https://github.com/ray-project/ray): 3876 [apache/beam](https://github.com/apache/beam): 3648 [plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795 [microsoft/recommenders](https://github.com/microsoft/recommenders): 2473
open-source-metrics/transformers-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-02T12:05:00+00:00
{"license": "apache-2.0", "pretty_name": "transformers metrics", "tags": ["github-stars"]}
2024-02-17T02:33:56+00:00
30ec6a996b5554d1f4294ca4c6b2879926981728
lewtun/music_classification
[ "license:unknown", "region:us" ]
2022-09-02T12:47:06+00:00
{"license": "unknown"}
2022-09-02T16:08:02+00:00
499bfa2c7cd0923311f8f2c4b24c5ffe462db922
# AutoTrain Dataset for project: dog-classifiers ## Dataset Description This dataset has been automatically processed by AutoTrain for project dog-classifiers. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<474x592 RGB PIL image>", "target": 1 }, { "image": "<474x296 RGB PIL image>", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=5, names=['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 598 | | valid | 150 |
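Illustrative only: a minimal sketch of loading this dataset with the `datasets` library. The repo id is assumed from the record above, and AutoTrain datasets may require authentication.

```python
from datasets import load_dataset

ds = load_dataset("julien-c/autotrain-data-dog-classifiers")

sample = ds["train"][0]
print(sample["image"].size)                  # a PIL image, e.g. (474, 592)
print(ds["train"].features["target"].names)  # ['akita inu', 'corgi', ...]
```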
julien-c/autotrain-data-dog-classifiers
[ "task_categories:image-classification", "region:us" ]
2022-09-02T14:21:11+00:00
{"task_categories": ["image-classification"]}
2022-09-02T15:13:38+00:00
61c35ebc14a9aec260ece1cb8061d3997663ea37
# SST-2 Spanish ## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2) #### For more information check the official [Dataset Card](https://huggingface.co/datasets/sst2)
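For illustration, a rough sketch of how a translation like this could be produced with EasyNMT. The model choice (`opus-mt`) and the batching are assumptions, not the author's exact setup.

```python
from datasets import load_dataset
from easynmt import EasyNMT

sst2 = load_dataset("sst2", split="train")
model = EasyNMT("opus-mt")  # assumed model; EasyNMT also offers m2m_100 variants

# Translate a handful of English sentences to Spanish.
sentences_es = model.translate(sst2["sentence"][:8], target_lang="es", source_lang="en")
print(sentences_es)
```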
mrm8488/sst2-es-mt
[ "task_categories:text-classification", "task_ids:sentiment-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:sst2", "language:es", "license:unknown", "region:us" ]
2022-09-02T19:28:50+00:00
{"language": ["es"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["sst2"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Stanford Sentiment Treebank v2"}
2022-09-03T15:41:42+00:00
f881ecdb455e1ef7b7e70164df594a98ddf3424e
# GoEmotions Spanish ## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [GoEmotions](https://huggingface.co/datasets/go_emotions) dataset. #### For more information check the official [Dataset Card](https://huggingface.co/datasets/go_emotions)
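Illustrative only: loading the translated dataset (the split layout is assumed to mirror the source GoEmotions dataset; inspect the printed object to confirm).

```python
from datasets import load_dataset

go_emotions_es = load_dataset("mrm8488/go_emotions-es-mt")
print(go_emotions_es)  # inspect the actual configs, splits, and fields
```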
mrm8488/go_emotions-es-mt
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:go_emotions", "language:es", "license:apache-2.0", "emotion", "region:us" ]
2022-09-02T19:59:52+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["go_emotions"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "GoEmotions", "tags": ["emotion"]}
2022-10-20T18:23:36+00:00
21747468e4ffa56f4d4352d1cac863e46ca6b68f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-large-book-summary * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-billsum-default-6d3727-15406134
[ "autotrain", "evaluation", "region:us" ]
2022-09-02T22:05:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}}
2022-09-03T14:34:02+00:00
0bb175d32c10b0d335b2b6c845f63669f7f7cc41
### dataset description We downloaded the open-reaction-database (ORD) dataset from [here](https://github.com/open-reaction-database/ord-data). As preprocessing, we removed overlapping data and canonicalized the SMILES using RDKit. We used the following function to canonicalize the data, and removed SMILES that could not be read by RDKit.

```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical (isomeric) SMILES.
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```

We randomly split the preprocessed data into train, validation, and test sets in an 8:1:1 ratio.
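As an illustration (not the authors' exact code), one way to reproduce an 8:1:1-style random split locally with the `datasets` library; the published repo may already expose its own splits, so inspect it first:

```python
from datasets import load_dataset

ds = load_dataset("sagawa/ord-uniq-canonicalized")

# If only a single split is exposed, a two-stage split gives roughly 8:1:1.
full = ds["train"].train_test_split(test_size=0.2, seed=42)
holdout = full["test"].train_test_split(test_size=0.5, seed=42)
train, valid, test = full["train"], holdout["train"], holdout["test"]
```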
sagawa/ord-uniq-canonicalized
[ "task_categories:text2text-generation", "task_categories:translation", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "license:apache-2.0", "ord", "chemical", "reaction", "region:us" ]
2022-09-03T03:28:23+00:00
{"annotations_creators": [], "language_creators": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "canonicalized ORD", "tags": ["ord", "chemical", "reaction"]}
2022-09-04T01:41:10+00:00
f83219601635a0a80fc99c13a9ca37f99ef34f0a
### dataset description We downloaded the PubChem-10m dataset from [here](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip) and canonicalized it. We used the following function to canonicalize the data and removed any SMILES that RDKit cannot read.
```python
from rdkit import Chem

def canonicalize(mol):
    # round-trip through RDKit to obtain the canonical SMILES string
    return Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
```
We randomly split the preprocessed data into train and validation sets with a 9:1 ratio.
sagawa/pubchem-10m-canonicalized
[ "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "license:apache-2.0", "PubChem", "chemical", "SMILES", "region:us" ]
2022-09-03T04:35:49+00:00
{"annotations_creators": [], "language_creators": ["expert-generated"], "language": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "canonicalized PubChem-10m", "tags": ["PubChem", "chemical", "SMILES"]}
2022-09-04T01:18:37+00:00
5497e797c551617bc1d94a859e4f3429f3d0b32d
### dataset description We downloaded the ZINC dataset from [here](https://zinc15.docking.org/) and canonicalized it. We used the following function to canonicalize the data and removed any SMILES that RDKit cannot read.
```python
from rdkit import Chem

def canonicalize(mol):
    # round-trip through RDKit to obtain the canonical SMILES string
    return Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
```
We randomly split the preprocessed data into train and validation sets with a 9:1 ratio.
sagawa/ZINC-canonicalized
[ "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "license:apache-2.0", "ZINC", "chemical", "SMILES", "region:us" ]
2022-09-03T05:01:18+00:00
{"annotations_creators": [], "language_creators": ["expert-generated"], "language": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "canonicalized ZINC", "tags": ["ZINC", "chemical", "SMILES"]}
2022-09-04T01:21:08+00:00
0b533459841603d5e5c20c41291bc8c981c49546
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: navsad/navid_test_bert * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@yooo](https://huggingface.co/yooo) for evaluating this model.
autoevaluate/autoeval-staging-eval-glue-cola-42256f-15426136
[ "autotrain", "evaluation", "region:us" ]
2022-09-03T12:50:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "navsad/navid_test_bert", "metrics": [], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-09-03T12:50:56+00:00
380447bc4f2c1e1693b4b8ffaf104c26b095b6f7
ndrnks/Lookhere
[ "size_categories:n<1K", "Cat", "Car", "Person", "region:us" ]
2022-09-03T15:01:23+00:00
{"size_categories": ["n<1K"], "pretty_name": "Indysrelinkobserver", "tags": ["Cat", "Car", "Person"]}
2023-08-13T08:09:57+00:00
7e22c8f616d706bebd86162860feabcf1c6affc4
# Dataset Card for Yandex_Jobs ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are perfectly structured, with no missing values. ### Supported Tasks and Leaderboards `text-generation` with the 'Raw text' column. `summarization` for producing the header from the full vacancy text (a pairing sketch follows this card). `multiple-choice` for the hashtags (choosing several from all hashtags available in the dataset). ### Languages The text in the dataset is only in Russian. The associated BCP-47 code is `ru`. ## Dataset Structure ### Data Instances Each data point is parsed from a vacancy posting of the Russian IT company [Yandex](https://ya.ru/). An example from the set looks as follows: ``` {'Header': 'Разработчик интерфейсов в группу разработки спецпроектов',
 'Emoji': '🎳',
 'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.',
 'Requirements': '• отлично знаете JavaScript • разрабатывали на Node.js, применяли фреймворк Express • умеете создавать веб-приложения на React + Redux • знаете HTML и CSS, особенности их отображения в браузерах',
 'Tasks': '• разрабатывать интерфейсы',
 'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты • умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене • работали с реляционными БД PostgreSQL',
 'Hashtags': '#фронтенд #турбо #JS',
 'Link': 'https://ya.cc/t/t7E3UsmVSKs6L',
 'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳 Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика. Мы ищем опытного и открытого новому фронтенд-разработчика.
Мы ждем, что вы: • отлично знаете JavaScript • разрабатывали на Node.js, применяли фреймворк Express • умеете создавать веб-приложения на React + Redux • знаете HTML и CSS, особенности их отображения в браузерах Что нужно делать: • разрабатывать интерфейсы Будет плюсом, если вы: • писали интеграционные, модульные, функциональные или браузерные тесты • умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене • работали с реляционными БД PostgreSQL https://ya.cc/t/t7E3UsmVSKs6L #фронтенд #турбо #JS' } ``` ### Data Fields - `Header`: A string with the position title (str) - `Emoji`: An emoji used at the end of the position title (usually associated with the position) (str) - `Description`: A short description of the vacancy (str) - `Requirements`: A few required technologies/programming languages/experience items (str) - `Tasks`: Examples of the tasks of the job position (str) - `Pluses`: A few points that are great for the applicant to have (technologies/experience/etc.) (str) - `Hashtags`: A list of hashtags associated with the job (usually programming languages) (str) - `Link`: A link to the job description (there may be more information there, but it is not checked) (str) - `Raw text`: The raw text with all the formatting from the channel, created from the other fields (str) ### Data Splits There are not enough examples yet to split them into train/test/val, in my opinion. ## Dataset Creation The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field were deleted (from 1600 vacancies down to 600 with no missing fields such as emojis or links). ## Considerations for Using the Data These vacancies are from only one IT company (Yandex). This means they can be quite specific and probably cannot be generalized to arbitrary vacancies, or even arbitrary IT vacancies. ## Contributions - **Point of Contact and Author:** Kirill Gelvan (Telegram: @kirili4ik)
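A minimal sketch of building the summarization pairs described under Supported Tasks ('Raw text' as input, 'Header' as target); the single `train` split name is an assumption:
```python
from datasets import load_dataset

ds = load_dataset("Kirili4ik/yandex_jobs", split="train")  # split name assumed
pairs = [{"text": row["Raw text"], "summary": row["Header"]} for row in ds]
print(pairs[0]["summary"])
```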
Kirili4ik/yandex_jobs
[ "task_categories:text-generation", "task_categories:summarization", "task_categories:multiple-choice", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ru", "license:unknown", "vacancies", "jobs", "ru", "yandex", "region:us" ]
2022-09-03T16:22:02+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ru"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "summarization", "multiple-choice"], "task_ids": ["language-modeling"], "paperswithcode_id": "climate-fever", "pretty_name": "yandex_jobs", "tags": ["vacancies", "jobs", "ru", "yandex"]}
2022-09-03T16:55:00+00:00
ee8774c4c8a9c7812856f14bdefecab8fe1576d3
### Abstract Social tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant. ### Content This dataset was first published in LREC 2018 at Miyazaki, Japan. Please find the paper here: [MPST: A Corpus of Movie Plot Synopses with Tags](https://aclanthology.org/L18-1274.pdf) Later, this dataset was enriched with user reviews; the corresponding paper, published at EMNLP 2020, is available here: [Multi-view Story Characterization from Movie Plot Synopses and Reviews](https://aclanthology.org/2020.emnlp-main.454.pdf) ### Keywords Tag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts More information is available here: http://ritual.uh.edu/mpst-2018/ Please cite the following papers if you use this dataset: ``` @InProceedings{KAR18.332, author = {Sudipta Kar and Suraj Maharjan and A. Pastor López-Monroy and Thamar Solorio}, title = {{MPST}: A Corpus of Movie Plot Synopses with Tags}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year = {2018}, month = {May}, date = {7-12}, location = {Miyazaki, Japan}, editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga}, publisher = {European Language Resources Association (ELRA)}, address = {Paris, France}, isbn = {979-10-95546-00-9}, language = {english} } ``` ``` @inproceedings{kar-etal-2020-multi, title = "Multi-view Story Characterization from Movie Plot Synopses and Reviews", author = "Kar, Sudipta and Aguilar, Gustavo and Lapata, Mirella and Solorio, Thamar", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.454", doi = "10.18653/v1/2020.emnlp-main.454", pages = "5629--5646", abstract = "This paper considers the problem of characterizing stories by inferring properties such as theme and style using written synopses and reviews of movies. We experiment with a multi-label dataset of movie synopses and a tagset representing various attributes of stories (e.g., genre, type of events). Our proposed multi-view model encodes the synopses and reviews using hierarchical attention and shows improvement over methods that only use synopses.
Finally, we demonstrate how we can take advantage of such a model to extract a complementary set of story-attributes from reviews without direct supervision. We have made our dataset and source code publicly available at https://ritual.uh.edu/multiview-tag-2020.", } ```
cryptexcode/MPST
[ "license:cc-by-4.0", "region:us" ]
2022-09-03T17:44:29+00:00
{"license": "cc-by-4.0"}
2022-09-03T19:43:00+00:00
cc63b218e0ec1fd354b4c094d3dc7be65e1a858a
# Dataset Card for "twowaydata" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
marcus2000/twowaydata
[ "region:us" ]
2022-09-03T21:01:38+00:00
{"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26052853, "num_examples": 33014}, {"name": "validation", "num_bytes": 3144818, "num_examples": 4000}, {"name": "test", "num_bytes": 3374221, "num_examples": 4254}], "download_size": 14113023, "dataset_size": 32571892}}
2023-02-23T19:13:45+00:00
0adcd5a08b689305c0dae8cf2c75c0bce419072a
# Dataset Card for LibriVox Indonesia 1.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Point of Contact:** [Cahya Wirawan](mailto:[email protected]) ### Dataset Summary The LibriVox Indonesia dataset consists of MP3 audio and corresponding text files we generated from the public-domain audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset. The original LibriVox audiobooks or sound files vary in duration from a few minutes to a few hours. Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. We converted the audiobooks to speech datasets using forced-alignment software we developed. It supports multiple languages, including low-resource ones such as Acehnese, Balinese, or Minangkabau, and it can also be used for other languages without additional work to train the model. The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files as we collect them. ### Languages ``` Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `reader` and `language`. ```python { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100 }, } ``` ### Data Fields `path` (`string`): The path to the audio file `language` (`string`): The language of the audio file `reader` (`string`): The reader ID in LibriVox `sentence` (`string`): The sentence the reader read from the book. `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus, it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. ### Data Splits The speech material has only a train split. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` ```
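To illustrate the access pattern recommended above, a minimal sketch (a language config may be required by `load_dataset`, and the 16 kHz target rate is only an illustrative choice):
```python
from datasets import load_dataset, Audio

ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train")
# indexing the row first decodes only this single file
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# optionally resample on the fly; 16 kHz is an example target rate
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```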
indonesian-nlp/librivox-indonesia
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:librivox", "language:ace", "language:ban", "language:bug", "language:ind", "language:min", "language:jav", "language:sun", "license:cc", "region:us" ]
2022-09-03T23:13:16+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ace", "ban", "bug", "ind", "min", "jav", "sun"], "license": "cc", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["librivox"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "LibriVox Indonesia 1.0"}
2024-02-01T20:55:53+00:00
01747f9e3b36fb579319d40898936edcd1a2a6af
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-76e071-15436137
[ "autotrain", "evaluation", "region:us" ]
2022-09-03T23:20:39+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-04T19:49:44+00:00
c5eeea30aae0f63dcdad307f32e4009865949f14
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fd18e2-15446138
[ "autotrain", "evaluation", "region:us" ]
2022-09-03T23:20:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-04T19:46:25+00:00
31825c0782fc7a127974c4b9bbdbc9a94a76fbdc
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-8aef96-15456139
[ "autotrain", "evaluation", "region:us" ]
2022-09-03T23:49:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-04T20:11:30+00:00
0d2ac8812872b678eb58191d0bf31a5d291c3759
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-25032a-15466140
[ "autotrain", "evaluation", "region:us" ]
2022-09-03T23:49:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-04T20:07:41+00:00
72c2361371b0b7483028f438a82af75b3554d689
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-096051-15476141
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T00:36:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-04T01:25:02+00:00
ad3dd0050b0c4d75e84eeaad39020c9499a4c0ce
This is a resume sentence classification dataset constructed from resume text (https://www.kaggle.com/datasets/oo7kartik/resume-text-batch). The dataset has five categories (experience, education, knowledge, project, others) and three element labels (header, content, meta). The dataset comes from a published paper, so if you use it in a paper or other work, please cite the following BibTeX entry. @article{甘程光2021英文履歴書データ抽出システムへの, title={英文履歴書データ抽出システムへの BERT 適用性の検討}, author={甘程光 and 高橋良英 and others}, journal={2021 年度 情報処理学会関西支部 支部大会 講演論文集}, volume={2021}, year={2021} }
ganchengguang/resume-5label-classification
[ "region:us" ]
2022-09-04T01:37:54+00:00
{}
2022-09-04T01:53:22+00:00
f119500feb836ba3656b0fb9aa6b5291f53c92e9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-xsum-default-a80438-15496142
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T01:39:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-04T02:28:51+00:00
f7f6abf17cdb0a878c12cc9bca448a2cb710357f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-01441a-15506143
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T01:39:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-04T02:30:03+00:00
1c22f0a860b96cf9f817b5718d16980e68000d95
Laasya/civis-consultation-summaries
[ "license:other", "region:us" ]
2022-09-04T06:48:58+00:00
{"license": "other"}
2022-09-04T06:52:15+00:00
7e53c29bdeff7c789c6e250abfcf98a55ff810f8
SamAct/medium_cleaned
[ "license:unlicense", "doi:10.57967/hf/0731", "region:us" ]
2022-09-04T07:27:39+00:00
{"license": "unlicense"}
2022-09-04T07:32:11+00:00
2769ee8f634047148254ef2e6bd0aa0241d77c79
pedramyamini/ku_radaw_news
[ "license:afl-3.0", "region:us" ]
2022-09-04T09:34:39+00:00
{"license": "afl-3.0"}
2023-10-05T03:05:49+00:00
d8da37c6401feb23c939245046f08ea4b1ad4f94
# Dataset Card for lener_br_text_to_lm ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/). The legal texts were obtained from the original token classification Hugging Face LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create a DatasetDict with a train and a validation dataset (20%). The LeNER-Br language modeling dataset allows the finetuning of language models such as BERTimbau base and large. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure
```
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 8316
    })
    test: Dataset({
        features: ['text'],
        num_rows: 2079
    })
})
```
### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
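A minimal sketch of the construction described in the summary (joining the token-classification tokens back into text, then holding out 20%); this is an illustration, not the curators' exact script:
```python
from datasets import load_dataset, Dataset

lener = load_dataset("lener_br")
# join each sentence's tokens back into a plain-text line
texts = [" ".join(tokens) for tokens in lener["train"]["tokens"]]
lm = Dataset.from_dict({"text": texts}).train_test_split(test_size=0.2)
print(lm)
```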
Luciano/lener_br_text_to_lm
[ "task_categories:fill-mask", "task_categories:text-generation", "task_ids:masked-language-modeling", "task_ids:language-modeling", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:pt", "region:us" ]
2022-09-04T09:36:21+00:00
{"annotations_creators": [], "language_creators": [], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).\n\nThe legal texts were obtained from the original token classification Hugging Face LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create a DatasetDict with train and validation dataset (20%).\n\nThe LeNER-Br language modeling dataset allows the finetuning of language models as BERTimbau base and large.", "tags": []}
2022-09-04T10:32:31+00:00
9e09fd3f93f3102e35dc67bdcb0d2669d5f93168
gaurikapse/civis-consultation-summaries
[ "task_categories:summarization", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:other", "legal", "indian", "government", "policy", "consultations", "region:us" ]
2022-09-04T09:55:11+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "civis-consultation-summaries", "tags": ["legal", "indian", "government", "policy", "consultations"]}
2022-09-04T17:05:08+00:00
b9e657fd54956571c5ff5c578a8fb1d3a4e854bd
haritzpuerto/MetaQA_Datasets
[ "license:apache-2.0", "region:us" ]
2022-09-04T14:42:01+00:00
{"license": "apache-2.0"}
2022-09-04T14:42:01+00:00
2636f596c4acb3c8832f51a7048f02b117226453
# Dataset Card for MetaQA Agents' Predictions ## Dataset Description - **Repository:** [MetaQA's Repository](https://github.com/UKPLab/MetaQA) - **Paper:** [MetaQA: Combining Expert Agents for Multi-Skill Question Answering](https://arxiv.org/abs/2112.01922) - **Point of Contact:** [Haritz Puerto](mailto:[email protected]) ## Dataset Summary This dataset contains the answer predictions of the QA agents for the [QA datasets](https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets) used in the [MetaQA paper](https://arxiv.org/abs/2112.01922). In particular, it contains the following QA agents' predictions: ### Span-Extraction Agents - Agent: Span-BERT Large (Joshi et al., 2020) trained on SQuAD. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on NewsQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on HotpotQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on SearchQA. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on Natural Questions. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on TriviaQA-web. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on QAMR. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on DuoRC. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP - Agent: Span-BERT Large (Joshi et al., 2020) trained on DROP. Predictions for: - SQuAD - NewsQA - HotpotQA - SearchQA - Natural Questions - TriviaQA-web - QAMR - DuoRC - DROP ### Multiple-Choice Agents - Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for: - RACE - Commonsense QA - BoolQ - HellaSWAG - Social IQA - Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for: - BoolQ ### Abstractive Agents - Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for: - DROP - Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for: - NarrativeQA ### Multimodal Agents - Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for: - HybridQA ### Languages All the QA datasets are in English, and thus the agents' predictions are also in English. ## Dataset Structure Each agent has a folder.
Inside, there is a folder for each dataset containing the following files: - predict_nbest_predictions.json - predict_predictions.json / predictions.json - predict_results.json (for span-extraction agents) ### Structure of predict_nbest_predictions.json ``` {id: [{"start_logit": ..., "end_logit": ..., "text": ..., "probability": ... }]} ``` ### Structure of predict_predictions.json ``` {id: answer_text} ``` ### Data Splits All the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop new multi-agent models and analyze the predictions of QA models. ### Discussion of Biases The QA models used to create these predictions may not be perfect, may generate false data, and may contain biases. The release of these predictions may help to identify such flaws in the models. ## Additional Information ### License The MetaQA Agents' Predictions dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation ``` @article{Puerto2021MetaQACE, title={MetaQA: Combining Expert Agents for Multi-Skill Question Answering}, author={Haritz Puerto and Gözde Gül Şahin and Iryna Gurevych}, journal={ArXiv}, year={2021}, volume={abs/2112.01922} } ```
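A minimal sketch of consuming an agent's n-best file under the layout above; the concrete agent/dataset folder names are hypothetical:
```python
import json
from pathlib import Path

# hypothetical folder names following the layout described above
pred_file = Path("spanbert_squad/squad/predict_nbest_predictions.json")
nbest = json.loads(pred_file.read_text(encoding="utf-8"))

# for each question id, keep the candidate with the highest probability
best = {qid: max(cands, key=lambda c: c["probability"])["text"]
        for qid, cands in nbest.items()}
```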
haritzpuerto/MetaQA_Agents_Predictions
[ "task_categories:question-answering", "multilinguality:monolingual", "source_datasets:mrqa", "source_datasets:duorc", "source_datasets:qamr", "source_datasets:boolq", "source_datasets:commonsense_qa", "source_datasets:hellaswag", "source_datasets:social_i_qa", "source_datasets:narrativeqa", "language:en", "license:apache-2.0", "multi-agent question answering", "multi-agent QA", "predictions", "arxiv:2112.01922", "region:us" ]
2022-09-04T14:50:38+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["mrqa", "duorc", "qamr", "boolq", "commonsense_qa", "hellaswag", "social_i_qa", "narrativeqa"], "task_categories": ["question-answering"], "task_ids": [], "paperswithcode_id": "metaqa-combining-expert-agents-for-multi", "pretty_name": "MetaQA Agents' Predictions", "tags": ["multi-agent question answering", "multi-agent QA", "predictions"]}
2022-09-04T19:16:51+00:00
a189eae9498de2ace8b54290c3f94b7286a4c7c2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-0e4017-15526144
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T15:42:49+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-04T15:46:04+00:00
9d4e8f919e11525f564bd99fdfa71164b26c299a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-a4ff98-15536145
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T15:42:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-04T15:46:49+00:00
02de1f4f6049b8d7f53d924789fbf67aa5244139
KoPI (Korpus Perayapan Indonesia)-NLLB is the Indonesian-family-language portion (aceh, bali, banjar, indonesia, jawa, minang, sunda) extracted from the NLLB dataset, [allenai/nllb](https://huggingface.co/datasets/allenai/nllb). Each language set was also filtered with deduplication techniques such as exact-hash (MD5) deduplication and MinHash LSH near-deduplication; a sketch of the exact-hash step is shown below. More details coming soon.
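A minimal sketch of the exact-hash (MD5) deduplication step mentioned above; this is an illustration, not the actual pipeline:
```python
import hashlib

def md5_key(text: str) -> str:
    # normalize lightly, then hash the UTF-8 bytes
    return hashlib.md5(text.strip().encode("utf-8")).hexdigest()

corpus = ["Halo dunia.", "Halo dunia.", "Apa kabar?"]  # stand-in documents
seen, unique = set(), []
for doc in corpus:
    key = md5_key(doc)
    if key not in seen:  # keep only the first copy of each exact duplicate
        seen.add(key)
        unique.append(doc)
print(unique)
```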
acul3/KoPI-NLLB
[ "region:us" ]
2022-09-04T15:52:01+00:00
{}
2022-09-06T04:49:03+00:00
654c7c822d4e30e593b84c0d17ffe8f5415596d5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen1234/testing * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-70f55d-15546146
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T17:24:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen1234/testing", "metrics": ["rouge", "mse", "mae", "squad"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-04T17:28:25+00:00
df39f858b9b08963848eeab993371aefa449f435
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-85416c-15556147
[ "autotrain", "evaluation", "region:us" ]
2022-09-04T17:24:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["rouge", "mse", "mae", "squad"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-04T17:27:44+00:00
e7ba41ad9c6e214c72e33639393fcb300187a5e4
gaurikapse/civis-consultations-transposed-data
[ "license:other", "region:us" ]
2022-09-04T17:43:33+00:00
{"license": "other"}
2022-09-04T17:45:18+00:00
57f02e50acc848309ad50777cc8988752d19b5d7
namban/ledgar
[ "license:afl-3.0", "region:us" ]
2022-09-04T19:00:44+00:00
{"license": "afl-3.0"}
2022-09-04T19:00:44+00:00
9824a87c0f39341c8a4427e6c8778ef59c5fa5c3
gandinaalikekeede/ledgar_cleaner
[ "license:afl-3.0", "region:us" ]
2022-09-04T19:06:23+00:00
{"license": "afl-3.0"}
2022-09-04T19:12:30+00:00
0c95d910357f5e262bd04790e5122eda781573fe
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model.
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-00af64-15586150
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T01:39:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-09-05T01:42:07+00:00
bb02409110bba66779b85f0271cef0f482f04404
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-175281-15596151
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T02:42:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-05T02:46:20+00:00
a63bf346e599e6796a015f39c17baa988b9e9f7e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-41c5cd-15606152
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T02:42:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-05T02:46:21+00:00
3cb8c00aa2e79441a8358d44e42652bc6c90e10a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-cc5bdf-15616153
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T02:42:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-05T02:47:47+00:00
8c35b13454d43f2319e368f1fe7c97a878af4c46
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br * Dataset: lener_br * Config: lener_br * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
autoevaluate/autoeval-staging-eval-lener_br-lener_br-f0f34b-15626154
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T04:06:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-09-05T04:09:08+00:00
4022c7affe48f8cf58cc541414c0a35a5eadd6d8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-e82d51-15636155
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T05:33:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse", "mae"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-05T05:37:40+00:00
bb6df4b8fdcd1576302511620ad6a8465e13fb39
victor/synthetic-donuts
[ "license:mit", "region:us" ]
2022-09-05T07:05:51+00:00
{"license": "mit"}
2022-09-05T07:05:51+00:00
e0b1e4d497fe81cad3e4695ae1c6c5ca7d64656d
# AutoTrain Dataset for project: satellite-image-classification ## Dataset Description This dataset has been automatically processed by AutoTrain for project satellite-image-classification. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows:
```json
[
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  },
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  }
]
```
### Dataset Fields The dataset has the following fields (also called "features"):
```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```
### Dataset Splits This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ---------- | ----------- |
| train      | 1200        |
| valid      | 300         |
victor/autotrain-data-satellite-image-classification
[ "task_categories:image-classification", "region:us" ]
2022-09-05T07:58:49+00:00
{"task_categories": ["image-classification"]}
2022-09-05T08:30:13+00:00
93c9ef572004a518c936aa13d9afbfd05b710aea
NOTE: All this data, plus a lot more, is now accessible at https://console.cloud.google.com/marketplace/product/bigquery-public-data/eumetsat-seviri-rss-hrv-uk?project=tactile-acrobat-249716 That dataset is the preferred way to access this data, as it goes back to the beginning of the RSS archive (2008-2023) and is updated on a roughly weekly basis. This dataset consists of the EUMETSAT Rapid Scan Service (RSS) imagery from 2014 to Feb 2023. The data comes in 2 formats: the High Resolution Visible channel (HRV), which covers Europe and North Africa at a resolution of roughly 2-3 km per pixel and is shifted each day to better image where the sun is shining, and the non-HRV data, which comprises 11 spectral channels at a 6-9 km resolution covering the top third of the Earth centered on Europe. These images are taken 5 minutes apart and have been compressed and stacked into Zarr stores. Using Xarray, these files can all be opened together to create one large Zarr store of HRV or non-HRV imagery.
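As a sketch of the access pattern described above, the stacked Zarr stores can be opened together lazily with Xarray; the file names below are hypothetical placeholders:
```python
import xarray as xr

# hypothetical paths to per-period HRV Zarr stores from this dataset
paths = ["hrv_2020.zarr", "hrv_2021.zarr"]

# lazily open all stores and combine them along their shared coordinates (time)
hrv = xr.open_mfdataset(paths, engine="zarr", combine="by_coords")
print(hrv)
```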
openclimatefix/eumetsat-rss
[ "size_categories:1K<n<10K", "license:other", "climate", "doi:10.57967/hf/1488", "region:us" ]
2022-09-05T08:25:53+00:00
{"license": "other", "size_categories": ["1K<n<10K"], "tags": ["climate"]}
2024-02-17T17:37:41+00:00
0d5751865d26618e2141fe0aecf06477d93d0955
# ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data. This is a dataset for classifying whether a sentence is ADE-related (True) or not (False). **Train size: 17,637** **Test size: 5,879** [Source dataset](https://huggingface.co/datasets/ade_corpus_v2) [Paper](https://www.sciencedirect.com/science/article/pii/S1532046412000615)
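A minimal usage sketch; the `train`/`test` split names are assumed from the sizes listed above:
```python
from datasets import load_dataset

# split names assumed to match the train/test sizes given above
ds = load_dataset("SetFit/ade_corpus_v2_classification")
print(ds["train"][0])
```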
SetFit/ade_corpus_v2_classification
[ "region:us" ]
2022-09-05T10:20:19+00:00
{}
2022-09-05T13:14:53+00:00
432cc594adf4bf4f47d7e3bfbf32b7c51608eeae
Osaleh/NE_ArSAS
[ "license:afl-3.0", "region:us" ]
2022-09-05T10:50:27+00:00
{"license": "afl-3.0"}
2022-09-05T10:52:06+00:00
409ea09f4f1af6cd28bdd26694f3f8aa679f6120
mteb/mteb-example-submission
[ "benchmark:mteb", "region:us" ]
2022-09-05T10:53:22+00:00
{"benchmark": "mteb", "type": "evaluation"}
2022-09-05T18:25:39+00:00
a6532be4f02ca12a871ba4910dc2b72e7b3cf4e2
asaxena1990/datasetpreview
[ "license:cc-by-sa-4.0", "region:us" ]
2022-09-05T11:17:21+00:00
{"license": "cc-by-sa-4.0"}
2022-09-05T11:18:05+00:00
698f0d0c15fbc15ca98d8757c294f397c5254a6a
asaxena1990/datasetpreviewcsv
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-09-05T11:40:50+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-09-05T11:51:14+00:00
6df1024387c78af81538a7223c70a8101c61d6aa
# Dataset Card for Europarl v7 (en-it split) This dataset contains only the English-Italian split of Europarl v7. We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students. For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/) ## Dataset Structure ### Data Fields - sent_en: English transcript - sent_it: Italian translation ### Data Splits We created three custom training/validation/testing splits. Feel free to rearrange them if needed. These ARE NOT by any means official splits. - train (1717204 pairs) - validation (190911 pairs) - test (1000 pairs) ### Citation Information If using the dataset, please cite: `Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers (pp. 79-86).` ### Contributions Thanks to [@g8a9](https://github.com/g8a9) for adding this dataset.
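A minimal usage sketch based on the fields and splits documented above:
```python
from datasets import load_dataset

ds = load_dataset("g8a9/europarl_en-it")
pair = ds["train"][0]
print(pair["sent_en"], "->", pair["sent_it"])
```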
g8a9/europarl_en-it
[ "task_categories:translation", "multilinguality:monolingual", "multilinguality:translation", "language:en", "language:it", "license:unknown", "region:us" ]
2022-09-05T12:53:46+00:00
{"language": ["en", "it"], "license": ["unknown"], "multilinguality": ["monolingual", "translation"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "Europarl v7 (en-it split)", "tags": []}
2022-09-07T09:14:04+00:00
ffb979b8a8247b442ec3adcf5fb83d3fff562f55
# Battery Device QA Data Battery device records, including anode, cathode, and electrolyte. Examples from the question-answering evaluation dataset: \{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\} \{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\} \{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\} # Usage
```
from datasets import load_dataset

dataset = load_dataset("batterydata/battery-device-data-qa")
```
Note: in the original BatteryBERT paper, 272 data records were used for evaluation after removing redundant records as well as paragraphs with a character length >= 1500. The filtering code is shown below:
```
import json

with open("answers.json", "r", encoding="utf-8") as f:
    data = json.load(f)

evaluation = []
for point in data["data"]:
    paragraphs = point["paragraphs"][0]["context"]
    # Keep only paragraphs shorter than 1500 characters.
    if len(paragraphs) < 1500:
        qas = point["paragraphs"][0]["qas"]
        for indiv in qas:
            try:
                question = indiv["question"]
                answer = indiv["answers"][0]["text"]
                pairs = (paragraphs, question, answer)
                evaluation.append(pairs)
            except (KeyError, IndexError):
                # Skip records with a missing question or answer.
                continue
```
# Citation
```
@article{huang2022batterybert,
    title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
    author={Huang, Shu and Cole, Jacqueline M},
    journal={J. Chem. Inf. Model.},
    year={2022},
    doi={10.1021/acs.jcim.2c00035},
    url={https://doi.org/10.1021/acs.jcim.2c00035},
    publisher={ACS Publications}
}
```
batterydata/battery-device-data-qa
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
2022-09-05T14:30:32+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["question-answering"], "pretty_name": "Battery Device Question Answering Dataset"}
2023-11-06T12:50:19+00:00
cbed321f16868443449817bad5f6ef18b64030e7
# diffusers metrics This dataset contains metrics about the huggingface/diffusers package. Number of repositories in the dataset: 160 Number of packages in the dataset: 2 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/diffusers/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![diffusers-dependent package star count](./diffusers-dependents/resolve/main/diffusers-dependent_package_star_count.png) | ![diffusers-dependent repository star count](./diffusers-dependents/resolve/main/diffusers-dependent_repository_star_count.png) There are 0 packages that have more than 1000 stars. There are 3 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121 [samedii/perceptor](https://github.com/samedii/perceptor): 1 *Repository* [gradio-app/gradio](https://github.com/gradio-app/gradio): 9168 [divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 4264 [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 3527 [bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 925 [nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 899 [sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 360 [Eventual-Inc/Daft](https://github.com/Eventual-Inc/Daft): 251 [JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121 [GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 113 [brycedrennan/imaginAIry](https://github.com/brycedrennan/imaginAIry): 104 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![diffusers-dependent package forks count](./diffusers-dependents/resolve/main/diffusers-dependent_package_forks_count.png) | ![diffusers-dependent repository forks count](./diffusers-dependents/resolve/main/diffusers-dependent_repository_forks_count.png) There are 0 packages that have more than 200 forks. There are 2 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* *Repository* [gradio-app/gradio](https://github.com/gradio-app/gradio): 574 [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 377 [bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 108 [divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 96 [nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 73 [GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 34 [sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 29 [coreweave/kubernetes-cloud](https://github.com/coreweave/kubernetes-cloud): 20 [bananaml/serverless-template-stable-diffusion](https://github.com/bananaml/serverless-template-stable-diffusion): 15 [AmericanPresidentJimmyCarter/yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot): 9 [NickLucche/stable-diffusion-nvidia-docker](https://github.com/NickLucche/stable-diffusion-nvidia-docker): 9 [vopani/waveton](https://github.com/vopani/waveton): 9 [harubaru/discord-stable-diffusion](https://github.com/harubaru/discord-stable-diffusion): 9
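As a consumption sketch (assuming the standard hub loader works for this repository; the other open-source-metrics datasets in this dump share the same schema), the `package` and `repository` splits each expose `name`, `stars`, and `forks` columns, so the top-10 lists above can be recomputed directly:
```
from datasets import load_dataset

deps = load_dataset("open-source-metrics/diffusers-dependents")

# Recompute the "top 10 repositories by stars" list from the card.
top_repos = sorted(deps["repository"], key=lambda r: r["stars"], reverse=True)[:10]
for repo in top_repos:
    print(repo["name"], repo["stars"], repo["forks"])
```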
open-source-metrics/diffusers-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:31:32+00:00
{"license": "apache-2.0", "pretty_name": "diffusers metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 2680, "num_examples": 62}, {"name": "repository", "num_bytes": 92837, "num_examples": 1976}], "download_size": 55374, "dataset_size": 95517}}
2024-02-16T22:46:05+00:00
91df9fbf9146c843ed3ab32c72fa64ba6b34a28f
# accelerate metrics This dataset contains metrics about the huggingface/accelerate package. Number of repositories in the dataset: 727 Number of packages in the dataset: 37 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/accelerate/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![accelerate-dependent package star count](./accelerate-dependents/resolve/main/accelerate-dependent_package_star_count.png) | ![accelerate-dependent repository star count](./accelerate-dependents/resolve/main/accelerate-dependent_repository_star_count.png) There are 10 packages that have more than 1000 stars. There are 16 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [fastai/fastai](https://github.com/fastai/fastai): 22774 [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674 [kornia/kornia](https://github.com/kornia/kornia): 7103 [facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 6548 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 [lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113 [catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985 [lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 1727 [abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 1101 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [google-research/google-research](https://github.com/google-research/google-research): 25092 [ray-project/ray](https://github.com/ray-project/ray): 22047 [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674 [kornia/kornia](https://github.com/kornia/kornia): 7103 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 [lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113 [wandb/wandb](https://github.com/wandb/wandb): 4738 [skorch-dev/skorch](https://github.com/skorch-dev/skorch): 4679 [catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![accelerate-dependent package forks count](./accelerate-dependents/resolve/main/accelerate-dependent_package_forks_count.png) | ![accelerate-dependent repository forks count](./accelerate-dependents/resolve/main/accelerate-dependent_repository_forks_count.png) There are 9 packages that have more than 200 forks. There are 16 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [fastai/fastai](https://github.com/fastai/fastai): 7297 [facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 975 [kornia/kornia](https://github.com/kornia/kornia): 723 [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582 [huggingface/diffusers](https://github.com/huggingface/diffusers): 490 [lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412 [catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366 [lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 235 [abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 136 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [google-research/google-research](https://github.com/google-research/google-research): 6139 [ray-project/ray](https://github.com/ray-project/ray): 3876 [roatienza/Deep-Learning-Experiments](https://github.com/roatienza/Deep-Learning-Experiments): 729 [kornia/kornia](https://github.com/kornia/kornia): 723 [lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582 [huggingface/diffusers](https://github.com/huggingface/diffusers): 490 [nlp-with-transformers/notebooks](https://github.com/nlp-with-transformers/notebooks): 436 [lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412 [catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366
open-source-metrics/accelerate-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:32:37+00:00
{"license": "apache-2.0", "pretty_name": "accelerate metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 4874, "num_examples": 116}, {"name": "repository", "num_bytes": 162350, "num_examples": 3488}], "download_size": 100048, "dataset_size": 167224}}
2024-02-16T19:02:17+00:00
7fb91ea38e6b089b6488c0648b92a9f80f5f6594
# evaluate metrics This dataset contains metrics about the huggingface/evaluate package. Number of repositories in the dataset: 106 Number of packages in the dataset: 3 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/evaluate/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![evaluate-dependent package star count](./evaluate-dependents/resolve/main/evaluate-dependent_package_star_count.png) | ![evaluate-dependent repository star count](./evaluate-dependents/resolve/main/evaluate-dependent_repository_star_count.png) There is 1 package that has more than 1000 stars. There are 2 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/accelerate](https://github.com/huggingface/accelerate): 2884 [fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 4 [entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 2 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70481 [huggingface/accelerate](https://github.com/huggingface/accelerate): 2884 [huggingface/evaluate](https://github.com/huggingface/evaluate): 878 [pytorch/benchmark](https://github.com/pytorch/benchmark): 406 [imhuay/studies](https://github.com/imhuay/studies): 161 [AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 128 [Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 32 [philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20 [hms-dbmi/scw](https://github.com/hms-dbmi/scw): 19 [philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15 [girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15 [lewtun/dl4phys](https://github.com/lewtun/dl4phys): 15 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![evaluate-dependent package forks count](./evaluate-dependents/resolve/main/evaluate-dependent_package_forks_count.png) | ![evaluate-dependent repository forks count](./evaluate-dependents/resolve/main/evaluate-dependent_repository_forks_count.png) There is 1 package that has more than 200 forks. There are 2 repositories that have more than 200 forks.
The top 10 in each category are the following: *Package* [huggingface/accelerate](https://github.com/huggingface/accelerate): 224 [fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 0 [entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 0 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [huggingface/accelerate](https://github.com/huggingface/accelerate): 224 [pytorch/benchmark](https://github.com/pytorch/benchmark): 131 [Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 67 [huggingface/evaluate](https://github.com/huggingface/evaluate): 48 [imhuay/studies](https://github.com/imhuay/studies): 42 [AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 14 [girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14 [hms-dbmi/scw](https://github.com/hms-dbmi/scw): 11 [kili-technology/automl](https://github.com/kili-technology/automl): 5 [whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
open-source-metrics/evaluate-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:33:19+00:00
{"license": "apache-2.0", "pretty_name": "evaluate metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 1830, "num_examples": 45}, {"name": "repository", "num_bytes": 54734, "num_examples": 1161}], "download_size": 37570, "dataset_size": 56564}}
2024-02-16T18:19:33+00:00
a70617c7ceb76742b60748626733a425d6aad03a
# optimum metrics This dataset contains metrics about the huggingface/optimum package. Number of repositories in the dataset: 19 Number of packages in the dataset: 6 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/optimum/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![optimum-dependent package star count](./optimum-dependents/resolve/main/optimum-dependent_package_star_count.png) | ![optimum-dependent repository star count](./optimum-dependents/resolve/main/optimum-dependent_repository_star_count.png) There are 0 packages that have more than 1000 stars. There are 0 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288 [AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114 [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61 [huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34 [huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24 [bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 10 *Repository* [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288 [marqo-ai/marqo](https://github.com/marqo-ai/marqo): 265 [AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114 [graphcore/tutorials](https://github.com/graphcore/tutorials): 65 [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61 [huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34 [huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24 [philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20 [philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15 [girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![optimum-dependent package forks count](./optimum-dependents/resolve/main/optimum-dependent_package_forks_count.png) | ![optimum-dependent repository forks count](./optimum-dependents/resolve/main/optimum-dependent_repository_forks_count.png) There are 0 packages that have more than 200 forks. There are 0 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82 [huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18 [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10 [AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6 [huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3 [bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 2 *Repository* [SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82 [graphcore/tutorials](https://github.com/graphcore/tutorials): 33 [huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18 [girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14 [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10 [marqo-ai/marqo](https://github.com/marqo-ai/marqo): 6 [AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6 [whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5 [philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 3 [huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
open-source-metrics/optimum-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:33:37+00:00
{"license": "apache-2.0", "pretty_name": "optimum metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 555, "num_examples": 13}, {"name": "repository", "num_bytes": 3790, "num_examples": 81}], "download_size": 6617, "dataset_size": 4345}}
2024-02-16T20:08:08+00:00
3baed3ff5e5357ef7362130470d47ca0fb92f29b
# tokenizers metrics This dataset contains metrics about the huggingface/tokenizers package. Number of repositories in the dataset: 11460 Number of packages in the dataset: 124 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/tokenizers/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![tokenizers-dependent package star count](./tokenizers-dependents/resolve/main/tokenizers-dependent_package_star_count.png) | ![tokenizers-dependent repository star count](./tokenizers-dependents/resolve/main/tokenizers-dependent_repository_star_count.png) There are 14 packages that have more than 1000 stars. There are 41 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70475 [hankcs/HanLP](https://github.com/hankcs/HanLP): 26958 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9439 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8461 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 3303 [neuml/txtai](https://github.com/neuml/txtai): 2530 [QData/TextAttack](https://github.com/QData/TextAttack): 2087 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1981 [utterworks/fast-bert](https://github.com/utterworks/fast-bert): 1760 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [hankcs/HanLP](https://github.com/hankcs/HanLP): 26958 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14842 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [gradio-app/gradio](https://github.com/gradio-app/gradio): 9169 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8462 [microsoft/unilm](https://github.com/microsoft/unilm): 6650 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo): 6431 [moyix/fauxpilot](https://github.com/moyix/fauxpilot): 6300 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![tokenizers-dependent package forks count](./tokenizers-dependents/resolve/main/tokenizers-dependent_package_forks_count.png) | ![tokenizers-dependent repository forks count](./tokenizers-dependents/resolve/main/tokenizers-dependent_repository_forks_count.png) There are 11 packages that have more than 200 forks. There are 39 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16158 [hankcs/HanLP](https://github.com/hankcs/HanLP): 7388 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 543 [utterworks/fast-bert](https://github.com/utterworks/fast-bert): 336 [nyu-mll/jiant](https://github.com/nyu-mll/jiant): 273 [QData/TextAttack](https://github.com/QData/TextAttack): 269 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 245 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [hankcs/HanLP](https://github.com/hankcs/HanLP): 7388 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4105 [plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695 [microsoft/unilm](https://github.com/microsoft/unilm): 1223 [openvinotoolkit/open_model_zoo](https://github.com/openvinotoolkit/open_model_zoo): 1207 [bhaveshlohana/HacktoberFest2020-Contributions](https://github.com/bhaveshlohana/HacktoberFest2020-Contributions): 1020 [data-science-on-aws/data-science-on-aws](https://github.com/data-science-on-aws/data-science-on-aws): 884
open-source-metrics/tokenizers-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:34:23+00:00
{"license": "apache-2.0", "pretty_name": "tokenizers metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 95, "num_examples": 2}, {"name": "repository", "num_bytes": 1893, "num_examples": 42}], "download_size": 5046, "dataset_size": 1988}}
2024-02-16T22:31:58+00:00
f90059cc985dd576947151f36883ca3607f2a195
# datasets metrics This dataset contains metrics about the huggingface/datasets package. Number of repositories in the dataset: 4997 Number of packages in the dataset: 215 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/datasets/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![datasets-dependent package star count](./datasets-dependents/resolve/main/datasets-dependent_package_star_count.png) | ![datasets-dependent repository star count](./datasets-dependents/resolve/main/datasets-dependent_repository_star_count.png) There are 22 packages that have more than 1000 stars. There are 43 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [fastai/fastbook](https://github.com/fastai/fastbook): 16052 [jina-ai/jina](https://github.com/jina-ai/jina): 16052 [borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12873 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [huggingface/tokenizers](https://github.com/huggingface/tokenizers): 5867 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 5422 [HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 4058 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70481 [google-research/google-research](https://github.com/google-research/google-research): 25092 [ray-project/ray](https://github.com/ray-project/ray): 22047 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [gradio-app/gradio](https://github.com/gradio-app/gradio): 9169 [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 7343 [microsoft/unilm](https://github.com/microsoft/unilm): 6650 [deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 5844 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![datasets-dependent package forks count](./datasets-dependents/resolve/main/datasets-dependent_package_forks_count.png) | ![datasets-dependent repository forks count](./datasets-dependents/resolve/main/datasets-dependent_repository_forks_count.png) There are 17 packages that have more than 200 forks. There are 40 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [fastai/fastbook](https://github.com/fastai/fastbook): 6033 [allenai/allennlp](https://github.com/allenai/allennlp): 2218 [jina-ai/jina](https://github.com/jina-ai/jina): 1967 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583 [HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 988 [borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 945 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658 [huggingface/tokenizers](https://github.com/huggingface/tokenizers): 502 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [google-research/google-research](https://github.com/google-research/google-research): 6139 [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493 [ray-project/ray](https://github.com/ray-project/ray): 3876 [allenai/allennlp](https://github.com/allenai/allennlp): 2218 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583 [x4nth055/pythoncode-tutorials](https://github.com/x4nth055/pythoncode-tutorials): 1435 [microsoft/unilm](https://github.com/microsoft/unilm): 1223 [deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 1055
open-source-metrics/datasets-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:38:22+00:00
{"license": "apache-2.0", "pretty_name": "datasets metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 15485, "num_examples": 376}, {"name": "repository", "num_bytes": 503612, "num_examples": 10931}], "download_size": 310753, "dataset_size": 519097}}
2024-02-16T20:05:31+00:00