sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
2c0ff370938b073a6e0e894789f0697c701e4f3d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: morenolq/distilbert-base-cased-emotion
* Dataset: emotion
* Config: default
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model. | autoevaluate/autoeval-eval-emotion-default-f266e6-1508354838 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:17:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "morenolq/distilbert-base-cased-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-19T13:17:45+00:00 |
675263df9cdf386ecb16016c1434cf90108914d5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-rte
* Dataset: glue
* Config: rte
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-rte-157f21-1508454839 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:17:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-rte", "metrics": [], "dataset_name": "glue", "dataset_config": "rte", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-09-19T13:17:54+00:00 |
a4302a5208a75bd5eafff39c433c0073cf7b649e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-qqp-b620ce-1508754840 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:17:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}} | 2022-09-19T13:20:34+00:00 |
e16f043921522ca6271d5174bfdc22889c7b446e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-mnli
* Dataset: glue
* Config: mnli_matched
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-mnli_matched-c9e0cb-1508854842 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:17:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-mnli", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli_matched", "dataset_split": "validation", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-09-19T13:18:46+00:00 |
400174f5e633d5a97f599969362628c5b028794f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JeremiahZ/roberta-base-cola
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-cola-b911f0-1508954843 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:48:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "multi_class_classification", "model": "JeremiahZ/roberta-base-cola", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-09-19T13:49:27+00:00 |
9509b6529ed2a785841e86bf1637353291e8ddab | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: JeremiahZ/bert-base-uncased-cola
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-cola-b911f0-1508954844 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:48:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "multi_class_classification", "model": "JeremiahZ/bert-base-uncased-cola", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-09-19T13:49:28+00:00 |
5b7b1e9a55331e18543b14c0ba25aaf38985337a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/roberta-base-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:49:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/roberta-base-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-09-19T13:49:33+00:00 |
bf06c398b669a4cb58387c071e8e4bf84eefd64f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/bert-base-uncased-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model. | autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054846 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T13:49:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-09-19T13:49:35+00:00 |
148a5dacde77aa5e337fdfaf0afbe75586dc86f9 | j0hngou/ccmatrix_en-it | [
"language:en",
"language:it",
"region:us"
] | 2022-09-19T15:33:17+00:00 | {"language": ["en", "it"]} | 2022-09-26T15:34:54+00:00 |
|
e992f84dd6d471143439e0a111e3b9d73ebc5f3a |
GAMa (Ground-video to Aerial-image Matching) dataset
Download at:
https://www.crcv.ucf.edu/data1/GAMa/
# GAMa: Cross-view Video Geo-localization
by [Shruti Vyas](https://scholar.google.com/citations?user=15YqUQUAAAAJ&hl=en); [Chen Chen](https://scholar.google.com/citations?user=TuEwcZ0AAAAJ&hl=en); [Mubarak Shah](https://scholar.google.com/citations?user=p8gsO3gAAAAJ&hl=en)
code at: https://github.com/svyas23/GAMa/blob/main/README.md
| svyas23/GAMa | [
"license:other",
"region:us"
] | 2022-09-19T16:17:00+00:00 | {"license": "other"} | 2022-09-19T16:34:14+00:00 |
7a7dd4cba7ff2944ded877a9b7064723698c2b6f | Impe/Stuff | [
"license:afl-3.0",
"region:us"
] | 2022-09-19T16:31:51+00:00 | {"license": "afl-3.0"} | 2022-09-19T16:31:51+00:00 |
|
7513b19b0b0283fcf2bf8e537f1fc6cba04250fe |
# Dataset Card for G-KOMET
### Dataset Summary
G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.
It is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in [KOMET](https://huggingface.co/datasets/cjvt/komet), where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this.
### Supported Tasks and Leaderboards
Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'document_name': 'G-Komet001.xml',
'idx': 3,
'idx_paragraph': 0,
'idx_sentence': 3,
'sentence_words': ['no', 'zdaj', 'samo', 'še', 'za', 'eno', 'orientacijo'],
'met_type': [
{'type': 'MRWi', 'word_indices': [6]}
],
'met_frame': [
{'type': 'spatial_orientation', 'word_indices': [6]}
]
}
```
The sentence comes from the document `G-Komet001.xml`; it is the 3rd sentence in the document overall and the 3rd sentence inside the document's 0th paragraph.
The word "orientacijo" is annotated as an indirect metaphor-related word (`MRWi`).
It is also annotated with the frame "spatial_orientation".
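As a hedged illustration (operating only on the sample instance above, not on the full dataset), the `word_indices` in `met_type` and `met_frame` can be mapped back to the surface words they mark:

```python
# Map `word_indices` in the annotations back to the words they mark.
# The dict below reproduces the sample instance shown above.
sample = {
    "sentence_words": ["no", "zdaj", "samo", "še", "za", "eno", "orientacijo"],
    "met_type": [{"type": "MRWi", "word_indices": [6]}],
    "met_frame": [{"type": "spatial_orientation", "word_indices": [6]}],
}

def annotated_words(sample, key="met_type"):
    """Return (annotation type, marked words) pairs for one sentence."""
    return [
        (ann["type"], [sample["sentence_words"][i] for i in ann["word_indices"]])
        for ann in sample[key]
    ]

print(annotated_words(sample))               # [('MRWi', ['orientacijo'])]
print(annotated_words(sample, "met_frame"))  # [('spatial_orientation', ['orientacijo'])]
```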
### Data Fields
- `document_name`: a string containing the name of the document in which the sentence appears;
- `idx`: a uint32 containing the index of the sentence inside its document;
- `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears;
- `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph;
- `sentence_words`: words in the sentence;
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices.
## Dataset Creation
The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words (linguistic expressions that have the potential to be interpreted as metaphors), idioms (multi-word units in which at least one word is used metaphorically), and metonymies (expressions in which one concept is used to refer to a closely related one).
For more information, please see the paper (written in Slovenian) or contact the dataset author.
## Additional Information
### Dataset Curators
Špela Antloga.
### Licensing Information
CC BY-NC-SA 4.0
### Citation Information
```
@InProceedings{antloga2022gkomet,
title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET},
author={Antloga, \v{S}pela},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)},
year={2022},
pages={271-277}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| cjvt/gkomet | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sl",
"license:cc-by-nc-sa-4.0",
"metaphor-classification",
"metonymy-classification",
"metaphor-frame-classification",
"multiword-expression-detection",
"region:us"
] | 2022-09-19T17:00:53+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": [], "pretty_name": "G-KOMET", "tags": ["metaphor-classification", "metonymy-classification", "metaphor-frame-classification", "multiword-expression-detection"]} | 2022-11-27T16:40:19+00:00 |
f2b534c65a64e8425f7aa01659af23493d84696e | hemangjoshi37a/token_classification_ratnakar_1300 | [
"license:mit",
"region:us"
] | 2022-09-19T17:02:43+00:00 | {"license": "mit"} | 2022-09-19T17:03:46+00:00 |
|
67f7da031721a14cc391c7fa7c8d96411282d8a3 | **(Jan. 8 2024) Test set labels are released**
# Dataset Card for SLUE
## Table of Contents
- [Dataset Card for SLUE](#dataset-card-for-slue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr)
- [Named Entity Recognition (NER)](#named-entity-recognition-ner)
- [Sentiment Analysis (SA)](#sentiment-analysis-sa)
- [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [voxpopuli](#voxpopuli)
- [voxceleb](#voxceleb)
- [Data Fields](#data-fields)
- [voxpopuli](#voxpopuli-1)
- [voxceleb](#voxceleb-1)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset)
- [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset)
- [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit)
- **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/)
- **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf)
- **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html)
- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 9.59 MB
- **Total amount of disk used:** 1.95 GB
### Dataset Summary
We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to
- Track research progress on multiple SLU tasks
- Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks
- Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.
For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details.
### Supported Tasks and Leaderboards
#### Automatic Speech Recognition (ASR)
Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).
#### Named Entity Recognition (NER)
Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.
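As a hedged sketch (not the official SLUE scorer), micro-averaged F1 over unordered (phrase, tag) pairs per sentence, and label-F1 over tags alone, could be computed like this:

```python
from collections import Counter

def micro_f1(preds, golds, tags_only=False):
    """preds/golds: one list of (phrase, tag) pairs per sentence."""
    tp = fp = fn = 0
    for p, g in zip(preds, golds):
        if tags_only:  # label-F1 considers only the tag predictions
            p, g = [t for _, t in p], [t for _, t in g]
        pc, gc = Counter(p), Counter(g)
        tp += sum((pc & gc).values())  # pairs matched in both lists
        fp += sum((pc - gc).values())  # predicted but not in gold
        fn += sum((gc - pc).values())  # gold but not predicted
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

preds = [[("europe", "LOC"), ("brussels i regulation", "LAW")]]
golds = [[("europe", "LOC"), ("two thousand and twelve", "DATE")]]
print(micro_f1(preds, golds))  # 0.5
```

Treating each sentence's annotation as a multiset makes the comparison order-independent, matching the description above.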
#### Sentiment Analysis (SA)
Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.
#### How-to-submit for your test set evaluation
See the [submission instructions](https://asappresearch.github.io/slue-toolkit/how-to-submit.html).
### Languages
The language data in SLUE is in English.
## Dataset Structure
### Data Instances
#### voxpopuli
- **Size of downloaded dataset files:** 398.45 MB
- **Size of the generated dataset:** 5.81 MB
- **Total amount of disk used:** 404.26 MB
An example of 'train' looks as follows.
```
{'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3',
'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg',
'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217,
0.07835515, 0.07845446], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'None',
'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will',
'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will',
'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]},
'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [227, 177, 28, 0],
'length': [6, 6, 21, 4]},
'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'],
'start': [243, 194, 45, 0],
'length': [6, 6, 21, 23]}}
```
#### voxceleb
- **Size of downloaded dataset files:** 1.55 GB
- **Size of the generated dataset:** 3.78 MB
- **Total amount of disk used:** 1.55 GB
An example of 'train' looks as follows.
```
{'id': 'id10059_229vKIGbxrI_00004',
'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac',
'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691,
0.00100708, 0.00033569], dtype=float32),
'sampling_rate': 16000},
'speaker_id': 'id10059',
'normalized_text': 'of god what is a creator the almighty that uh',
'sentiment': 'Neutral',
'start_second': 0.45,
'end_second': 4.52}
```
### Data Fields
#### voxpopuli
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `raw_text`: a `string` feature that contains the raw transcription of the audio.
- `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**.
- `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes.
- `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes.
- `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`).
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**.
Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity.
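As a sketch of how the parallel lists line up, the `(tag, substring)` pairs can be recovered by slicing the text; the values below reuse the `normalized_text` and `normalized_ner` fields from the sample instance above:

```python
# Recover entity surface strings from the parallel type/start/length lists.
text = (
    "two thousand and twelve for instance the new brussels i regulation "
    "provides for the right for employees to sue several employers together "
    "and the right for employees to have access to courts in europe even if "
    "the employer is domiciled outside europe. the commission will"
)
ner = {"type": ["LOC", "LOC", "LAW", "DATE"],
       "start": [243, 194, 45, 0],
       "length": [6, 6, 21, 23]}

def entity_spans(text, ner):
    """Return (tag, substring) pairs for one NER annotation dict."""
    return [(t, text[s:s + n])
            for t, s, n in zip(ner["type"], ner["start"], ner["length"])]

print(entity_spans(text, ner))
# [('LOC', 'europe'), ('LOC', 'europe'),
#  ('LAW', 'brussels i regulation'), ('DATE', 'two thousand and twelve')]
```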
#### voxceleb
- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sampling_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sampling_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
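Putting these fields together, the transcribed segment can be cut out of the decoded waveform. The sketch below uses a synthetic 5-second signal in place of a real decoded audio array:

```python
# Crop the annotated segment out of a decoded waveform.
sr = 16000
example = {
    "audio": {"array": [0.0] * (5 * sr), "sampling_rate": sr},  # 5 s dummy signal
    "start_second": 0.45,
    "end_second": 4.52,
}

def crop_segment(example):
    """Slice audio["array"] down to [start_second, end_second)."""
    sr = example["audio"]["sampling_rate"]
    start = int(example["start_second"] * sr)
    end = int(example["end_second"] * sr)
    return example["audio"]["array"][start:end]

segment = crop_segment(example)
print(len(segment) / sr)  # 4.07 seconds of audio
```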
### Data Splits
| |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000| 1753|1842|
|voxceleb | 5777| 1454|3553|
Here we use the standard split names in Hugging Face's `datasets`, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of the SLUE datasets, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
#### SLUE-VoxPopuli Dataset
SLUE-VoxPopuli dataset contains a subset of the VoxPopuli dataset, and the copyright of this subset remains under the original license, CC0. See also the European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/).
Additionally, we provide named entity annotations (the normalized_ner and raw_ner columns in the .tsv files), which are covered under the same CC0 license.
#### SLUE-VoxCeleb Dataset
SLUE-VoxCeleb Dataset contains a subset of the OXFORD VoxCeleb dataset, and the copyright of this subset remains under the same Creative Commons Attribution 4.0 International license, reproduced below. Additionally, we provide transcriptions, sentiment annotations, and timestamps (start, end) under the same license as the OXFORD VoxCeleb dataset.
##### Original License of OXFORD VGG VoxCeleb Dataset
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.
The speakers span a wide range of different ethnicities, accents, professions and ages.
We provide Youtube URLs, associated face detections, and timestamps, as
well as cropped audio segments and cropped face videos from the
dataset. The copyright of both the original and cropped versions
of the videos remains with the original owners.
The data is covered under a Creative Commons
Attribution 4.0 International license (Please read the
license terms here. https://creativecommons.org/licenses/by/4.0/).
Downloading this dataset implies agreement to follow the same
conditions for any modification and/or
re-distribution of the dataset in any form.
Additionally any entity using this dataset agrees to the following conditions:
THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Please cite [1,2] below if you make use of the dataset.
[1] J. S. Chung, A. Nagrani, A. Zisserman
VoxCeleb2: Deep Speaker Recognition
INTERSPEECH, 2018.
[2] A. Nagrani, J. S. Chung, A. Zisserman
VoxCeleb: a large-scale speaker identification dataset
INTERSPEECH, 2017
### Citation Information
```
@inproceedings{shon2022slue,
title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7927--7931},
year={2022},
organization={IEEE}
}
```
### Contributions
Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset. | asapp/slue | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:sentiment-analysis",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"arxiv:2111.10367",
"region:us"
] | 2022-09-19T17:07:59+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0", "cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification", "text-classification", "token-classification"], "task_ids": ["sentiment-analysis", "named-entity-recognition"], "paperswithcode_id": "slue", "pretty_name": "SLUE (Spoken Language Understanding Evaluation benchmark)", "tags": [], "configs": [{"config_name": "voxceleb", "data_files": [{"split": "train", "path": "voxceleb/train-*"}, {"split": "validation", "path": "voxceleb/validation-*"}, {"split": "test", "path": "voxceleb/test-*"}]}, {"config_name": "voxpopuli", "data_files": [{"split": "train", "path": "voxpopuli/train-*"}, {"split": "validation", "path": "voxpopuli/validation-*"}, {"split": "test", "path": "voxpopuli/test-*"}]}], "dataset_info": [{"config_name": "voxceleb", "features": [{"name": "index", "dtype": "int32"}, {"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speaker_id", "dtype": "string"}, {"name": "normalized_text", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "start_second", "dtype": "float64"}, {"name": "end_second", "dtype": "float64"}, {"name": "local_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 875444694.0, "num_examples": 5777}, {"name": "validation", "num_bytes": 213065127.0, "num_examples": 1454}, {"name": "test", "num_bytes": 545473843.0, "num_examples": 3553}], "download_size": 1563299519, "dataset_size": 1633983664.0}, {"config_name": "voxpopuli", "features": [{"name": "index", "dtype": "int32"}, {"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speaker_id", "dtype": "string"}, {"name": "normalized_text", "dtype": "string"}, 
{"name": "raw_text", "dtype": "string"}, {"name": "raw_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "normalized_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "raw_combined_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "normalized_combined_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "local_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 240725040.0, "num_examples": 5000}, {"name": "validation", "num_bytes": 83155577.099, "num_examples": 1753}, {"name": "test", "num_bytes": 83518039.328, "num_examples": 1842}], "download_size": 404062275, "dataset_size": 407398656.427}]} | 2024-01-12T05:15:39+00:00 |
ff88393aa85808a6172b21e19e27a40ab882a734 | Initial annotated dataset derived from `ImageIN/IA_unlabelled` | ImageIN/ImageIn_annotations | [
"task_categories:image-classification",
"region:us"
] | 2022-09-19T17:16:25+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "ImageIn hand labelled", "tags": []} | 2022-09-26T11:20:03+00:00 |
c76f26430961c9cb3dd896809d3b303225bd6003 | A piece of Federico García Lorca's body of work. | smkerr/lorca | [
"region:us"
] | 2022-09-19T19:00:37+00:00 | {} | 2022-09-19T19:02:06+00:00 |
3958a8cdbd470eff2573faad9d0ff7eeac90e6c3 | darcksky/All-Rings | [
"license:afl-3.0",
"region:us"
] | 2022-09-19T19:05:00+00:00 | {"license": "afl-3.0"} | 2022-09-19T19:13:29+00:00 |
|
da12b1d9362a363f50e046dd887987142fee4ff8 | wgarstka/test | [
"license:other",
"region:us"
] | 2022-09-19T19:10:45+00:00 | {"license": "other"} | 2022-09-19T19:10:45+00:00 |
|
9fbd8304e81d1eadc8eda9738dec458621f25f79 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-1f3143-1511754885 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T19:30:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-30b-copy", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-19T20:08:28+00:00 |
7b69020abbf7a32f15059b9d57dc576ad84006c5 | spacemanidol/rewrite-noisy-queries | [
"license:mit",
"region:us"
] | 2022-09-19T19:37:46+00:00 | {"license": "mit"} | 2022-09-19T19:55:24+00:00 |
|
f3d381966197dcc430263fbd80b5aa01fedadfb6 | mertcobanov/mozart-diff-small-256 | [
"task_categories:image-to-image",
"size_categories:100K<n<1M",
"region:us"
] | 2022-09-19T20:46:03+00:00 | {"size_categories": ["100K<n<1M"], "task_categories": ["image-to-image"], "pretty_name": "Mozart Operas"} | 2023-01-05T21:33:43+00:00 |
|
084060f16b46f3165318f760b2339208b19a0bde |
# Dataset Card for ASQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html
### Dataset Summary
ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Unlike previous long-form answer datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer is evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well correlated with human judgments.
### Supported Tasks and Leaderboards
Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)
### Languages
- English
## Dataset Structure
### Data Instances
```py
{
"ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
"qa_pairs": [
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
"short_answers": [
"the people of the United States"
],
"wikipage": None
},
{
"context": "No context provided",
"question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
"short_answers": [
"United States government"
],
"wikipage": None
}
],
"wikipages": [
{
"title": "Civil Liberties Act of 1988",
"url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
}
],
"annotations": [
{
"knowledge": [
{
"content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
"wikipage": "Civil Liberties Act of 1988"
}
],
"long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
}
],
"sample_id": -4557617869928758000
}
```
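For the QA-accuracy side of evaluation, the short answers from the disambiguated `qa_pairs` can be checked against a generated long-form answer. Below is a minimal illustrative sketch using the sample instance above; the substring-matching logic is an assumption for demonstration, not the official ASQA scorer:

```python
# Illustrative sketch: check which disambiguated short answers
# appear in a generated long-form answer (not the official ASQA metric).

sample = {
    "qa_pairs": [
        {"short_answers": ["the people of the United States"]},
        {"short_answers": ["United States government"]},
    ]
}

def short_answer_recall(long_answer: str, qa_pairs: list) -> float:
    """Fraction of QA pairs with at least one short answer found verbatim."""
    hits = 0
    for pair in qa_pairs:
        if any(ans.lower() in long_answer.lower() for ans in pair["short_answers"]):
            hits += 1
    return hits / len(qa_pairs)

generated = (
    "The blame was placed on the people of the United States by apologizing "
    "on their behalf, and on the United States government through reparations."
)
print(short_answer_recall(generated, sample["qa_pairs"]))  # 1.0
```

A real scorer would normalize punctuation and aliases before matching; this sketch only shows how the annotated short answers relate to the long answer.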
### Data Fields
- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: annotation.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
### Data Splits
| **Split** | **Instances** |
|-----------|---------------|
| Train | 4353 |
| Dev | 948 |
## Additional Information
### Contributions
Thanks to [@din0s](https://github.com/din0s) for adding this dataset. | din0s/asqa | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|ambig_qa",
"language:en",
"license:apache-2.0",
"factoid questions",
"long-form answers",
"arxiv:2204.06092",
"region:us"
] | 2022-09-19T21:25:51+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|ambig_qa"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "ASQA", "tags": ["factoid questions", "long-form answers"]} | 2022-09-20T15:14:54+00:00 |
c5a4721b5d4ff814a1af2020df60566a313ea67b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. | autoevaluate/autoeval-eval-Tristan__zero-shot-classification-large-test-Tristan__z-8b146c-1511954902 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-19T21:26:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-30b-copy", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-21T04:08:06+00:00 |
53485f36c96f2307855b50421da83f27bfff2397 | vincentchai/b52092000 | [
"license:apache-2.0",
"region:us"
] | 2022-09-20T02:16:34+00:00 | {"license": "apache-2.0"} | 2022-09-20T02:16:34+00:00 |
|
922289449f1fd355224c344759378c53532a2189 | Natmat/Test | [
"license:other",
"region:us"
] | 2022-09-20T02:52:04+00:00 | {"license": "other"} | 2022-10-19T05:59:35+00:00 |
|
baa096440c81620325d5c6f774eacb668dbd1db8 | - Social science en-ko translation corpus
| bongsoo/social_science_en_ko | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-09-20T03:45:54+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T23:09:30+00:00 |
8ffecf6e6c61389f9c02f13f3875d810ff506fa3 |
- News & everyday conversation en-ko translation corpus | bongsoo/news_talk_en_ko | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-09-20T04:10:56+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T23:09:50+00:00 |
d356ef19a4eb287e88a51d07a56b73ba88c7f188 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | ai4bharat/IndicCOPA | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:extended|xcopa",
"language:as",
"language:bn",
"language:en",
"language:gom",
"language:gu",
"language:hi",
"language:kn",
"language:mai",
"language:ml",
"language:mr",
"language:ne",
"language:or",
"language:pa",
"language:sa",
"language:sat",
"language:sd",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-4.0",
"region:us"
] | 2022-09-20T07:18:35+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["as", "bn", "en", "gom", "gu", "hi", "kn", "mai", "ml", "mr", "ne", "or", "pa", "sa", "sat", "sd", "ta", "te", "ur"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|xcopa"], "task_categories": ["multiple-choice"], "task_ids": ["multiple-choice-qa"], "pretty_name": "IndicXCOPA", "tags": []} | 2022-12-15T11:34:32+00:00 |
e58cab3ab22391abadb7397dcc938c07ec1e91a5 | NaturalTeam/KoBART_TEST | [
"license:unknown",
"region:us"
] | 2022-09-20T07:41:33+00:00 | {"license": "unknown"} | 2022-09-20T07:41:33+00:00 |
|
f8da6feede333581902766efa79a7701e0287b44 | Shushant/NepaliCovidTweets | [
"license:other",
"region:us"
] | 2022-09-20T07:54:59+00:00 | {"license": "other"} | 2022-09-20T07:59:06+00:00 |
|
fcbf84785bd5d498892cf01a322a92bb1a17f9bb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-373400-1514054915 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T08:57:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-21T14:33:56+00:00 |
bec9eb5363a82c6de35a6426842e86f55db7e9c1 | vuksan314/Lavko | [
"license:cc",
"region:us"
] | 2022-09-20T10:47:53+00:00 | {"license": "cc"} | 2022-09-20T10:51:55+00:00 |
|
773b86a2ed4dee382df30a17ea4e00c490e5d2d1 | varun-d/demo-data | [
"license:apache-2.0",
"region:us"
] | 2022-09-20T11:28:55+00:00 | {"license": "apache-2.0"} | 2022-09-20T12:58:21+00:00 |
|
3aaacdae72ffce33d77189f33dab28e9e4f7007a | ksang/TwitchStreams | [
"region:us"
] | 2022-09-20T11:35:10+00:00 | {} | 2022-09-20T13:20:36+00:00 |
|
a3d4cb163d1cbad84af92ed4f6e9b4ada4cb0d69 | niallashley/regenerate | [
"license:cc",
"region:us"
] | 2022-09-20T13:50:05+00:00 | {"license": "cc"} | 2022-09-20T14:00:01+00:00 |
|
97139a9fbab6912b3fd89604427d4304d20847e6 |
# Dataset Card for RSDO4 en-sl parallel corpus
### Dataset Summary
The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order, which can be used for machine translation training.
### Supported Tasks and Leaderboards
Machine translation.
### Languages
English, Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'en_seq': 'the total value of its assets exceeds EUR 30000000000;',
'sl_seq': 'skupna vrednost njenih sredstev presega 30000000000 EUR'
}
```
### Data Fields
- `en_seq`: a string containing the English sequence;
- `sl_seq`: a string containing the Slovene sequence.
## Additional Information
### Dataset Curators
Andraž Repar and Iztok Lebar Bajec.
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{rsdo4_en_sl,
title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0},
author = {Repar, Andra{\v z} and Lebar Bajec, Iztok},
url = {http://hdl.handle.net/11356/1457},
year = {2021}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| cjvt/rsdo4_en_sl | [
"task_categories:translation",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:translation",
"size_categories:100K<n<1M",
"language:en",
"language:sl",
"license:cc-by-sa-4.0",
"parallel data",
"rsdo",
"region:us"
] | 2022-09-20T14:23:40+00:00 | {"annotations_creators": ["expert-generated", "found"], "language_creators": ["crowdsourced"], "language": ["en", "sl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["translation", "text2text-generation", "text-generation"], "task_ids": [], "pretty_name": "RSDO4 en-sl parallel corpus", "tags": ["parallel data", "rsdo"]} | 2022-09-20T16:38:33+00:00 |
9ee9719a3ff0a5ef8d5e31eff4f5dd81a08fe47b | nonnon/test | [
"license:other",
"region:us"
] | 2022-09-20T14:37:10+00:00 | {"license": "other"} | 2022-09-25T12:59:28+00:00 |
|
62c78627f3072a1454fa0cb0184737cafe5e4198 |
# HumanEval-X
## Dataset Description
[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.
## Dataset Structure
To load the dataset you need to specify a subset among the 5 existing languages `[python, cpp, go, java, js]`. By default `python` is loaded.
```python
from datasets import load_dataset
data = load_dataset("THUDM/humaneval-x", "js")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
num_rows: 164
})
})
```
```python
next(iter(data["test"]))
{'task_id': 'JavaScript/0',
'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```
## Data Fields
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
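For functional-correctness evaluation, a candidate completion is typically concatenated with the ``prompt`` and the hidden ``test`` to form one runnable program. A minimal sketch of that assembly step follows; the record contents and the completion string are illustrative stand-ins (real HumanEval-X prompts are longer, and real evaluation runs the program in a sandbox with a timeout):

```python
# Assemble an executable check from a HumanEval-X style record
# (sketch only; the record and completion below are illustrative).

record = {
    "task_id": "Python/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "test": "assert add(1, 2) == 3\nassert add(-1, 1) == 0\n",
}

completion = "    return a + b\n"  # stand-in for a model-generated body

program = record["prompt"] + completion + "\n" + record["test"]
exec_globals: dict = {}
exec(program, exec_globals)  # raises AssertionError if the tests fail
print("passed", record["task_id"])
```

The same assembly works per language: for non-Python subsets the concatenated source is written to a file and compiled/run with the language's toolchain instead of `exec`.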
## Data Splits
Each subset has one split: test.
## Citation Information
Refer to https://github.com/THUDM/CodeGeeX. | THUDM/humaneval-x | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:apache-2.0",
"region:us"
] | 2022-09-20T15:23:53+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "HumanEval-X"} | 2022-10-25T05:08:38+00:00 |
884ea34ad5711abf4fa430a58eed5fcaf6bebaea | nlp-guild/medical-data | [
"license:mit",
"region:us"
] | 2022-09-20T15:46:48+00:00 | {"license": "mit"} | 2022-09-20T15:47:13+00:00 |
|
09a7ed9517756e50b961dd44c17d91b2a9292bb0 |
# pytorch-image-models metrics
This dataset contains metrics about the huggingface/pytorch-image-models package.
Number of repositories in the dataset: 3615
Number of packages in the dataset: 89
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/pytorch-image-models/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 18 packages that have more than 1000 stars.
There are 39 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 70536
[fastai/fastai](https://github.com/fastai/fastai): 22776
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390
[MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 6424
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115
[awslabs/autogluon](https://github.com/awslabs/autogluon): 4818
[neuml/txtai](https://github.com/neuml/txtai): 2531
[open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 2357
[open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 2271
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1999
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70536
[commaai/openpilot](https://github.com/commaai/openpilot): 35919
[facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 22287
[ray-project/ray](https://github.com/ray-project/ray): 22057
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390
[NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 9260
[microsoft/unilm](https://github.com/microsoft/unilm): 6664
[pytorch/tutorials](https://github.com/pytorch/tutorials): 6331
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115
[hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI): 4944
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 12 packages that have more than 200 forks.
There are 28 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 16175
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791
[fastai/fastai](https://github.com/fastai/fastai): 7296
[MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 1765
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217
[open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 787
[awslabs/autogluon](https://github.com/awslabs/autogluon): 638
[open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 321
[rwightman/efficientdet-pytorch](https://github.com/rwightman/efficientdet-pytorch): 265
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 247
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16175
[open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791
[commaai/openpilot](https://github.com/commaai/openpilot): 6603
[facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 6033
[ray-project/ray](https://github.com/ray-project/ray): 3879
[pytorch/tutorials](https://github.com/pytorch/tutorials): 3478
[NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 2499
[microsoft/unilm](https://github.com/microsoft/unilm): 1223
[qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217
[layumi/Person_reID_baseline_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch): 928
| open-source-metrics/pytorch-image-models-dependents | [
"license:apache-2.0",
"github-stars",
"region:us"
] | 2022-09-20T17:47:36+00:00 | {"license": "apache-2.0", "pretty_name": "pytorch-image-models metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}} | 2024-02-16T20:19:14+00:00 |
6f09b80cc6924269b90040678851440eb7fca9b6 | huggingface-projects/color-palettes-sd | [
"license:cc-by-4.0",
"region:us"
] | 2022-09-20T19:44:07+00:00 | {"license": "cc-by-4.0"} | 2023-06-21T08:48:10+00:00 |
|
deed3ddd239c882afb8c65feebe82015ba82bcb5 | gexai/inquisitiveqg | [
"license:unknown",
"region:us"
] | 2022-09-20T20:13:53+00:00 | {"license": "unknown"} | 2022-09-20T20:22:53+00:00 |
|
4a8f8026a4dc86f31a7576da3a12b48008a6565a | j0hngou/ccmatrix_en-fr | [
"language:en",
"language:fr",
"region:us"
] | 2022-09-20T21:39:51+00:00 | {"language": ["en", "fr"]} | 2022-09-26T15:35:19+00:00 |
|
f0f93f25d29f82efdd73689b88b36c8fc85d4e41 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-431a89-1518654983 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T21:48:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-20T22:13:17+00:00 |
5a6a80994c21d0d9b4f87e828633e9aa549a4a8c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-7e8d42-1518754984 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T21:48:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-20T22:20:18+00:00 |
850f60cb653353971f22827cf61e6b1d1a2a53a5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-61a81c-1518854985 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T21:48:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-22T01:29:45+00:00 |
bc5a20bfe51eff9d9e3e6bfe9d02ccb09cd15f72 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-4428b0-1518954986 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T21:48:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-22T03:13:05+00:00 |
eb2885f64a337ab00115293d9856a96f80b30d40 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-b534aa-1519254997 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-20T22:47:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-20T23:18:15+00:00 |
a760d3533762a423ca38cb5f4d1d59a31f016a68 | Moussab/ORKG-training-evaluation-set | [
"license:afl-3.0",
"region:us"
] | 2022-09-20T23:39:50+00:00 | {"license": "afl-3.0"} | 2022-10-12T12:44:47+00:00 |
|
aac811df777aae214beb430564b14042ac1b4618 | slartibartfast/emojis2 | [
"license:openrail",
"region:us"
] | 2022-09-20T23:42:17+00:00 | {"license": "openrail"} | 2022-09-21T13:16:56+00:00 |
|
35887c2231bd760062d6b0089c0f147ae61a111e | Moussab/evaluation-vanilla-models | [
"license:afl-3.0",
"region:us"
] | 2022-09-20T23:42:56+00:00 | {"license": "afl-3.0"} | 2022-09-20T23:44:35+00:00 |
|
4eb43f034eb3fac376bb1c84851523adb09029f0 | Moussab/evaluation-results-fine-tuned-models | [
"license:afl-3.0",
"region:us"
] | 2022-09-20T23:45:37+00:00 | {"license": "afl-3.0"} | 2022-09-20T23:46:23+00:00 |
|
ae75e6b3d921b85c9a7f5510181d1a32fc140c3c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-dd03f7-1519455003 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:14:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-21T16:34:50+00:00 |
84e95341fadae3179e6f9418e04ab530f0411814 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-launch__gov_report-plain_text-4ad6c8-1519755004 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:15:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-21T06:37:56+00:00 |
8fcbf087a8ba256d1d8ad78d5474126481b43e73 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-big_patent-y-b4cccf-1519855005 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:15:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}} | 2022-09-22T05:24:35+00:00 |
94ff6a5935f6cd3ff8a915f76e6852c4a3667a7f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-a5c306-1520055006 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:15:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-21T01:23:40+00:00 |
169d0612fccaa4dd7bff2fa33ab533b40aeef69e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-bf100b-1520255007 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:15:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": ["rouge"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-21T01:23:16+00:00 |
523d566065cd18bc42172c82f9ffa933eaf29b05 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: Tristan/zero_shot_classification_test
* Config: Tristan--zero_shot_classification_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. | autoevaluate/autoeval-eval-Tristan__zero_shot_classification_test-Tristan__zero_sh-c10c5c-1520355008 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T01:23:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero_shot_classification_test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-66b-copy", "metrics": [], "dataset_name": "Tristan/zero_shot_classification_test", "dataset_config": "Tristan--zero_shot_classification_test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-21T02:16:17+00:00 |
5d3309b8aa10d7cf28752a9589c8a8a99325e069 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ColdYoungGuy](https://huggingface.co/ColdYoungGuy) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-e4ddf6-1520555010 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T03:30:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-21T03:32:36+00:00 |
6a940d4970bd3b248c1d6e3f35bd59c7befdfade | HighSodium/inflation | [
"license:odbl",
"region:us"
] | 2022-09-21T07:01:53+00:00 | {"license": "odbl"} | 2022-09-21T07:07:12+00:00 |
|
a8f7d8754929868c25e7139e643b59a41dc19964 | Harrietofthesea/public_test | [
"license:cc",
"region:us"
] | 2022-09-21T07:26:50+00:00 | {"license": "cc"} | 2022-09-21T07:31:29+00:00 |
|
af9881620d1112fee620f0b76a93233233d0e017 | sdhj/wwww | [
"license:apache-2.0",
"region:us"
] | 2022-09-21T08:27:24+00:00 | {"license": "apache-2.0"} | 2022-09-21T08:47:48+00:00 |
|
f9fb35f4134e32b9c8100199d949398fd6d08a5f | We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by `source_id`:
Validation: 4420696 4448760 4461799 4469836 4473238 4482110
Test: 4432298 4450488 4470290 4479741 4483338 4485244
Train: remainder
An official script for processing these splits will be released shortly.
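Until the official script is released, the partition above can be sketched with a small helper; the function name and the assumption that `source_id` values compare as strings are illustrative:

```python
# Source IDs for the held-out splits, as listed above.
VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

def split_of(source_id):
    """Assign a split to a row based on its `source_id`; train is the remainder."""
    sid = str(source_id)
    if sid in VALIDATION_IDS:
        return "validation"
    if sid in TEST_IDS:
        return "test"
    return "train"
```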
| sanchit-gandhi/earnings22_split | [
"region:us"
] | 2022-09-21T09:35:49+00:00 | {} | 2022-09-23T08:44:26+00:00 |
16c96aacfd2f858c7577cd1944a8e67992036e8c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-x-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-e42237-1523455078 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T10:41:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-09-21T17:28:50+00:00 |
b87e432d0decd12b0de10ce6c92a3c75536f2b3f | AIRI-Institute/I4TALK_DATA | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-09-21T10:51:05+00:00 | {"license": "cc-by-sa-4.0"} | 2022-09-21T10:51:05+00:00 |
|
7c1cc64b8570c0d0882b285941fd625c4bbb886c |
# 1 Source
Source: https://github.com/alibaba-research/ChineseBLUE
# 2 Definition of the tagset
```python
tag_set = [
'B_手术',
'I_疾病和诊断',
'B_症状',
'I_解剖部位',
'I_药物',
'B_影像检查',
'B_药物',
'B_疾病和诊断',
'I_影像检查',
'I_手术',
'B_解剖部位',
'O',
'B_实验室检验',
'I_症状',
'I_实验室检验'
]
tag2id = lambda tag: tag_set.index(tag)
id2tag = lambda id: tag_set[id]
```
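As a quick sanity check, the helpers above can round-trip a (made-up) BIO tag sequence; the definitions are repeated so the snippet is self-contained:

```python
# Repeated from above so this snippet runs on its own.
tag_set = ['B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物',
           'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术',
           'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验']
tag2id = lambda tag: tag_set.index(tag)
id2tag = lambda id: tag_set[id]

# Encode an invented tag sequence to ids and decode it back.
tags = ['B_症状', 'I_症状', 'O', 'B_药物', 'I_药物']
ids = [tag2id(t) for t in tags]      # [2, 13, 11, 6, 4]
decoded = [id2tag(i) for i in ids]
assert decoded == tags
```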
# 3 Citation
To use this dataset in your work, please cite:
Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining
```
@article{zhang2020conceptualized,
title={Conceptualized Representation Learning for Chinese Biomedical Text Mining},
author={Zhang, Ningyu and Jia, Qianghuai and Yin, Kangping and Dong, Liang and Gao, Feng and Hua, Nengwei},
journal={arXiv preprint arXiv:2008.10813},
year={2020}
}
```
| Adapting/chinese_biomedical_NER_dataset | [
"license:mit",
"region:us"
] | 2022-09-21T11:52:05+00:00 | {"license": "mit"} | 2022-09-21T17:21:15+00:00 |
9377b07c09c9e734468cb85f7a58b16c46aa264c | myt517/GID_benchmark | [
"license:apache-2.0",
"region:us"
] | 2022-09-21T12:42:32+00:00 | {"license": "apache-2.0"} | 2022-09-21T13:06:09+00:00 |
|
b52c6bf1f753da7c473f7954708a160b26fcaa6e | ArneBinder/xfund | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-21T13:57:42+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-09-21T14:12:34+00:00 |
|
51d9269a2818c7fe39b9380efc9a62f40a8e5b2e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-bf74a8-1524255094 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T14:21:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-21T17:43:44+00:00 |
662fce7ab3d2e18087973b1f15470b1dfaf81f9e |
# Dataset Card for TellMeWhy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://stonybrooknlp.github.io/tellmewhy/
- **Repository:** https://github.com/StonyBrookNLP/tellmewhy
- **Paper:** https://aclanthology.org/2021.findings-acl.53/
- **Leaderboard:** None
- **Point of Contact:** [Yash Kumar Lal](mailto:[email protected])
### Dataset Summary
TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described.
### Supported Tasks and Leaderboards
The dataset is designed to test why-question answering abilities of models when bound by local context.
### Languages
English
## Dataset Structure
### Data Instances
A typical data point consists of a story, a question, and a crowdsourced answer to that question. The instance also indicates whether the question's answer is implicit or explicitly stated in the text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context.
```
{
"narrative":"Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.",
"question":"Why did Cam order a pizza?",
"original_sentence_for_question":"Cam ordered a pizza and took it home.",
"narrative_lexical_overlap":0.3333333333,
"is_ques_answerable":"Not Answerable",
"answer":"Cam was hungry.",
"is_ques_answerable_annotator":"Not Answerable",
"original_narrative_form":[
"Cam ordered a pizza and took it home.",
"He opened the box to take out a slice.",
"Cam discovered that the store did not cut the pizza for him.",
"He looked for his pizza cutter but did not find it.",
"He had to use his chef knife to cut a slice."
],
"question_meta":"rocstories_narrative_41270_sentence_0_question_0",
"helpful_sentences":[
],
"human_eval":false,
"val_ann":[
],
"gram_ann":[
]
}
```
### Data Fields
- `question_meta` - Unique meta for each question in the corpus
- `narrative` - Full narrative from ROCStories. Used as the context with which the question and answer are associated
- `question` - Why question about an action or event in the narrative
- `answer` - Crowdsourced answer to the question
- `original_sentence_for_question` - Sentence in narrative from which question was generated
- `narrative_lexical_overlap` - Unigram overlap of answer with the narrative
- `is_ques_answerable` - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models.
- `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative.
- `original_narrative_form` - ROCStories narrative as an array of its sentences
- `human_eval` - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset.
- `val_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human_eval flag is False.
- `gram_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human_eval flag is False.
### Data Splits
The data is split into training, validation, and test sets.
| Train | Valid | Test |
| ------ | ----- | ----- |
| 23964 | 2992 | 3563 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
ROCStories corpus (Mostafazadeh et al, 2016)
#### Initial Data Collection and Normalization
ROCStories was used to create why-questions related to actions and events in the stories.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Amazon Mechanical Turk workers were provided with a story and an associated why-question, and asked to answer it. Three answers were collected for each question. For a small subset of questions, the quality of the answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset.
#### Who are the annotators?
Amazon Mechanical Turk workers
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Evaluation
To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set has been completed, select the answers on which human evaluation needs to be performed by keeping only the questions (one answer per question; deduplication might be needed) in the test set where the `human_eval` flag is set to `True`. This subset can then be used to complete the requisite evaluation on TellMeWhy.
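A minimal sketch of that filtering step, assuming examples are plain dicts with the fields listed above (the function and variable names are illustrative):

```python
def human_eval_subset(test_rows):
    """Keep one answer per question (by `question_meta`) where `human_eval` is True."""
    seen, subset = set(), []
    for row in test_rows:
        if row["human_eval"] and row["question_meta"] not in seen:
            seen.add(row["question_meta"])
            subset.append(row)
    return subset
```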
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{lal-etal-2021-tellmewhy,
title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives",
author = "Lal, Yash Kumar and
Chambers, Nathanael and
Mooney, Raymond and
Balasubramanian, Niranjan",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.53",
doi = "10.18653/v1/2021.findings-acl.53",
pages = "596--610",
}
```
### Contributions
Thanks to [@yklal95](https://github.com/ykl7) for adding this dataset. | StonyBrookNLP/tellmewhy | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-09-21T15:11:29+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "TellMeWhy"} | 2024-01-24T21:12:22+00:00 |
0af0ec66aa94b834cd671169833768ef6063285e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: mathemakitten/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-169e67-1524755111 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T16:28:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "mathemakitten/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-21T16:48:48+00:00 |
c4d0527ce23b301ba6b56bcf1c32d302d75c9bfb | MvsSrs/quistest | [
"license:unknown",
"region:us"
] | 2022-09-21T16:38:44+00:00 | {"license": "unknown"} | 2022-09-26T20:09:58+00:00 |
|
71fce68bfcbd42b9ac56f691818a957ef3c8f4fa | PotatoGod/testing | [
"license:afl-3.0",
"region:us"
] | 2022-09-21T16:50:32+00:00 | {"license": "afl-3.0"} | 2022-09-22T08:19:25+00:00 |
|
d27fa3d9aea71a1de1cfc280bb534887b05f510d | This dataset consists of Pubchem molecules downloaded from: https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/
There are in total ~85M compounds for training, with an additional ~10M held out for validation and testing. | zpn/pubchem_selfies | [
"license:openrail",
"region:us"
] | 2022-09-21T18:51:06+00:00 | {"license": "openrail"} | 2022-10-04T15:15:19+00:00 |
42a28644fe76522463f587f3719cab6a920f86a5 | mehr4n-m/parsinlu-en-fa-structrual-edit | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-21T20:17:17+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-11-10T22:59:16+00:00 |
|
8852346e4b76d1f815e1b272c840d45d7dc08ea8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-f407ed-1527355152 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-21T21:30:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-09-21T21:50:42+00:00 |
3af942a32b98c8e16043ec591f92f5c368ed2953 |
# Avatar Dataset
A raw data stack of 18,000 sample images created for [Avatar AI](https://t.me/AvatarAIBot).
## Features
- 256X256 Medium Quality
- Micro Bloom
| phaticusthiccy/avatar | [
"region:us"
] | 2022-09-21T21:30:24+00:00 | {} | 2022-09-21T21:40:14+00:00 |
dc30b042b8caa6fc0cdbe7511e1867919f10fd80 |
# How Resilient are Imitation Learning Methods to Sub-Optimal Experts?
## Related Work
Trajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]()
The code that uses this data is on GitHub: https://github.com/NathanGavenski/How-resilient-IL-methods-are
# Structure
These trajectories were generated using [Stable Baselines](https://stable-baselines.readthedocs.io/en/master/).
Each file is a dictionary containing a set of trajectories, with the following keys:
* actions: the action at the given timestamp `t`
* obs: the current state at the given timestamp `t`
* rewards: the reward received after the action at the given timestamp `t`
* episode_returns: the aggregated reward of each episode (each file consists of 5000 runs)
* episode_starts: whether that `obs` is the first state of an episode (boolean list)
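One way the episode-start flags tie these keys together is in recovering per-episode returns from the flat reward list. This is a toy pure-Python sketch with invented values (the on-disk file format and exact key casing are not shown here), not code from the released repository:

```python
def episode_returns(rewards, episode_starts):
    """Sum rewards between consecutive True flags in `episode_starts`."""
    returns = []
    for r, start in zip(rewards, episode_starts):
        if start:
            returns.append(0.0)  # a True flag opens a new episode
        returns[-1] += r
    return returns

# Toy data: two episodes of lengths 2 and 3.
print(episode_returns([1, 2, 3, 4, 5], [True, False, True, False, False]))  # [3.0, 12.0]
```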
## Citation Information
```
@inproceedings{gavenski2022how,
title={How Resilient are Imitation Learning Methods to Sub-Optimal Experts?},
author={Nathan Gavenski and Juarez Monteiro and Adilson Medronha and Rodrigo Barros},
booktitle={2022 Brazilian Conference on Intelligent Systems (BRACIS)},
year={2022},
organization={IEEE}
}
```
## Contact:
- [Nathan Schneider Gavenski]([email protected])
- [Juarez Monteiro]([email protected])
- [Adilson Medronha]([email protected])
- [Rodrigo C. Barros]([email protected])
| NathanGavenski/How-Resilient-are-Imitation-Learning-Methods-to-Sub-Optimal-Experts | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"size_categories:100B<n<1T",
"source_datasets:original",
"license:mit",
"Imitation Learning",
"Expert Trajectories",
"Classic Control",
"region:us"
] | 2022-09-21T22:41:37+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["mit"], "multilinguality": [], "size_categories": ["100B<n<1T"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "How Resilient are Imitation Learning Methods to Sub-Optimal Experts?", "tags": ["Imitation Learning", "Expert Trajectories", "Classic Control"]} | 2022-10-25T13:48:38+00:00 |
fc13ca9b1583fd4f16359a22cc7053eeb6d75f76 | mafzal/SOAP-notes | [
"license:apache-2.0",
"region:us"
] | 2022-09-22T00:18:51+00:00 | {"license": "apache-2.0"} | 2022-09-22T00:39:39+00:00 |
|
cee49c3f84bb914fbde672730c614a1cb2bff03f | dataDRVN/dog-wesley | [
"license:afl-3.0",
"region:us"
] | 2022-09-22T02:44:21+00:00 | {"license": "afl-3.0"} | 2022-09-22T02:52:54+00:00 |
|
aba349e6b3a4d06820576289db881e37f2d5c5e3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-fanpage
* Dataset: scan
* Config: simple
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@test_yoon_0921](https://huggingface.co/test_yoon_0921) for evaluating this model. | autoevaluate/autoeval-eval-scan-simple-0b9bd3-1528755178 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T03:23:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scan"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": [], "dataset_name": "scan", "dataset_config": "simple", "dataset_split": "train", "col_mapping": {"text": "commands", "target": "actions"}}} | 2022-09-22T03:29:45+00:00 |
8381f2d7cd133cc20378a943ae802a21e0dd1a11 | # AutoTrain Dataset for project: nllb_600_ft
## Dataset Description
This dataset has been automatically processed by AutoTrain for project nllb_600_ft.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": "772",
"feat_URL": "https://en.wikivoyage.org/wiki/Apia",
"feat_domain": "wikivoyage",
"feat_topic": "Travel",
"feat_has_image": "0",
"feat_has_hyperlink": "0",
"text": "All the ships were sunk, except for one British cruiser. Nearly 200 American and German lives were lost.",
"target": "\u0628\u0647\u200c\u062c\u0632 \u06cc\u06a9 \u06a9\u0634\u062a\u06cc \u062c\u0646\u06af\u06cc \u0627\u0646\u06af\u0644\u06cc\u0633\u06cc \u0647\u0645\u0647 \u06a9\u0634\u062a\u06cc\u200c\u0647\u0627 \u063a\u0631\u0642 \u0634\u062f\u0646\u062f\u060c \u0648 \u0646\u0632\u062f\u06cc\u06a9 \u0628\u0647 200 \u0646\u0641\u0631 \u0622\u0645\u0631\u06cc\u06a9\u0627\u06cc\u06cc \u0648 \u0622\u0644\u0645\u0627\u0646\u06cc \u062c\u0627\u0646 \u062e\u0648\u062f \u0631\u0627 \u0627\u0632 \u062f\u0633\u062a \u062f\u0627\u062f\u0646\u062f."
},
{
"feat_id": "195",
"feat_URL": "https://en.wikinews.org/wiki/Mitt_Romney_wins_Iowa_Caucus_by_eight_votes_over_surging_Rick_Santorum",
"feat_domain": "wikinews",
"feat_topic": "Politics",
"feat_has_image": "0",
"feat_has_hyperlink": "0",
"text": "Bachmann, who won the Ames Straw Poll in August, decided to end her campaign.",
"target": "\u0628\u0627\u062e\u0645\u0646\u060c \u06a9\u0647 \u062f\u0631 \u0645\u0627\u0647 \u0622\u06af\u0648\u0633\u062a \u0628\u0631\u0646\u062f\u0647 \u0646\u0638\u0631\u0633\u0646\u062c\u06cc \u0622\u0645\u0633 \u0627\u0633\u062a\u0631\u0627\u0648 \u0634\u062f\u060c \u062a\u0635\u0645\u06cc\u0645 \u06af\u0631\u0641\u062a \u06a9\u0645\u067e\u06cc\u0646 \u062e\u0648\u062f \u0631\u0627 \u062e\u0627\u062a\u0645\u0647 \u062f\u0647\u062f."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='string', id=None)",
"feat_URL": "Value(dtype='string', id=None)",
"feat_domain": "Value(dtype='string', id=None)",
"feat_topic": "Value(dtype='string', id=None)",
"feat_has_image": "Value(dtype='string', id=None)",
"feat_has_hyperlink": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1608 |
| valid | 402 |
| mehr4n-m/autotrain-data-nllb_600_ft | [
"region:us"
] | 2022-09-22T04:51:54+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-09-22T04:54:15+00:00 |
15477fbdfae891174be78e6285353d67d3b712cb |
# Dataset Card for ssj500k
**Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.
### Dataset Summary
The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:
- named entity recognition (config `named_entity_recognition`)
- dependency parsing(*), Universal Dependencies style (config `dependency_parsing_ud`)
- dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`)
- semantic role labeling (config `semantic_role_labeling`)
- multi-word expressions (config `multiword_expressions`)
If you want to load all the data along with their partial annotations, please use the config `all_data`.
\* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._
### Supported Tasks and Leaderboards
Tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, and multi-word expression detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset (using the config `all_data`):
```
{
'id_doc': 'ssj1',
'idx_par': 0,
'idx_sent': 0,
'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'],
'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'],
'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'],
'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'],
'has_ne_ann': True,
'has_ud_dep_ann': True,
'has_jos_dep_ann': True,
'has_srl_ann': True,
'has_mwe_ann': True,
'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5],
'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'],
'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1],
'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'],
'srl_info': [
{'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'},
{'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'},
{'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'},
{'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'},
{'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'},
{'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'}
],
'mwe_info': [
{'type': 'IRV', 'word_indices': [7, 8]}
]
}
```
### Data Fields
The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs.
- `id_doc`: a string containing the identifier of the document;
- `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of;
- `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph;
- `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149;
- `words`: a list of strings containing the words in the current sentence;
- `lemmas`: a list of strings containing the lemmas in the current sentence;
- `msds`: a list of strings containing the morphosyntactic description of words in the current sentence;
- `has_ne_ann`: a bool indicating whether the current example has named entities annotated;
- `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated;
- `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated;
- `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated;
- `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated;
- `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`;
- `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`;
- `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`;
- `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with `-2`;
- `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`;
- `srl_info`: a list of dicts, each containing the index of the argument word, the index of the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty;
- `mwe_info`: a list of dicts, each containing word indices and the type of a multi-word expression;
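For illustration, the index-based `srl_info` entries can be resolved against `words` to obtain human-readable (argument, head, role) triples. A minimal sketch in plain Python, using part of the sample instance above (the helper name is ours, not part of the dataset):

```python
def resolve_srl(words, srl_info):
    """Map index-based SRL annotations to (argument word, head word, role) triples."""
    return [(words[e["idx_arg"]], words[e["idx_head"]], e["role"]) for e in srl_info]

# Words and two SRL entries from the sample instance above
words = ['"', "Tistega", "večera", "sem", "preveč", "popil", ",", "zgodilo", "se", "je",
         "mesec", "dni", "po", "tem", ",", "ko", "sem", "izvedel", ",", "da",
         "me", "žena", "vara", "."]
srl_info = [{"idx_arg": 2, "idx_head": 5, "role": "TIME"},
            {"idx_arg": 21, "idx_head": 22, "role": "ACT"}]

print(resolve_srl(words, srl_info))
# → [('večera', 'popil', 'TIME'), ('žena', 'vara', 'ACT')]
```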
#### Data fields in 'named_entity_recognition'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags']
```
#### Data fields in 'dependency_parsing_ud'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel']
```
#### Data fields in 'dependency_parsing_jos'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel']
```
#### Data fields in 'semantic_role_labeling'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info']
```
#### Data fields in 'multiword_expressions'
```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info']
```
## Additional Information
### Dataset Curators
Simon Krek; et al. (please see http://hdl.handle.net/11356/1434 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
The paper describing the dataset:
```
@InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
}
```
The resource itself:
```
@misc{krek2021clarinssj500k,
title = {Training corpus ssj500k 2.3},
author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
url = {http://hdl.handle.net/11356/1434},
year = {2021} }
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | cjvt/ssj500k | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"language:sl",
"license:cc-by-nc-sa-4.0",
"semantic-role-labeling",
"multiword-expression-detection",
"region:us"
] | 2022-09-22T05:31:03+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "10K<n<100K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech", "lemmatization", "parsing"], "pretty_name": "ssj500k", "tags": ["semantic-role-labeling", "multiword-expression-detection"]} | 2022-12-09T08:58:50+00:00 |
9f0ee7856c82c2e53f74187e8e6f62bf5f401806 | christianwbsn/indotacos | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-09-22T05:42:41+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-09-22T05:47:12+00:00 |
|
69c6690b6b195935df66f1942f221dd459f561cb | biomegix/soap-notes | [
"license:apache-2.0",
"region:us"
] | 2022-09-22T07:04:39+00:00 | {"license": "apache-2.0"} | 2022-09-22T07:20:42+00:00 |
|
39256ba0c7edbf7fa945f2fcf44ee1a42c5a89d1 | Nadav/runaway_scans | [
"license:afl-3.0",
"region:us"
] | 2022-09-22T07:55:37+00:00 | {"license": "afl-3.0"} | 2022-09-22T07:57:09+00:00 |
|
80845435ce686b8a9dbf70a05452fbfb8e09cdd7 |
# Dataset Card for Fashionpedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fashionpedia.github.io/home/index.html
- **Repository:** https://github.com/cvdfoundation/fashionpedia
- **Paper:** https://arxiv.org/abs/2004.12276
### Dataset Summary
Fashionpedia is a dataset mapping out the visual aspects of the fashion world.
From the paper:
> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.
Fashionpedia has:
- 46781 images
- 342182 bounding-boxes
### Supported Tasks
- Object detection
- Image classification
### Languages
All annotations use English as the primary language.
## Dataset Structure
The dataset is structured as follows:
```py
DatasetDict({
train: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 45623
})
val: Dataset({
features: ['image_id', 'image', 'width', 'height', 'objects'],
num_rows: 1158
})
})
```
### Data Instances
An example of the data for one image is:
```py
{'image_id': 23,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=682x1024>,
'width': 682,
'height': 1024,
'objects': {'bbox_id': [150311, 150312, 150313, 150314],
'category': [23, 23, 33, 10],
'bbox': [[445.0, 910.0, 505.0, 983.0],
[239.0, 940.0, 284.0, 994.0],
[298.0, 282.0, 386.0, 352.0],
[210.0, 282.0, 448.0, 665.0]],
'area': [1422, 843, 373, 56375]}}
```
With the type of each field being defined as:
```py
{'image_id': Value(dtype='int64'),
'image': Image(decode=True),
'width': Value(dtype='int64'),
'height': Value(dtype='int64'),
'objects': Sequence(feature={
'bbox_id': Value(dtype='int64'),
'category': ClassLabel(num_classes=46, names=['shirt, blouse', 'top, t-shirt, sweatshirt', 'sweater', 'cardigan', 'jacket', 'vest', 'pants', 'shorts', 'skirt', 'coat', 'dress', 'jumpsuit', 'cape', 'glasses', 'hat', 'headband, head covering, hair accessory', 'tie', 'glove', 'watch', 'belt', 'leg warmer', 'tights, stockings', 'sock', 'shoe', 'bag, wallet', 'scarf', 'umbrella', 'hood', 'collar', 'lapel', 'epaulette', 'sleeve', 'pocket', 'neckline', 'buckle', 'zipper', 'applique', 'bead', 'bow', 'flower', 'fringe', 'ribbon', 'rivet', 'ruffle', 'sequin', 'tassel']),
'bbox': Sequence(feature=Value(dtype='float64'), length=4),
'area': Value(dtype='int64')},
length=-1)}
```
### Data Fields
The dataset has the following fields:
- `image_id`: Unique numeric ID of the image.
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: Image width.
- `height`: Image height.
- `objects`: A dictionary containing bounding box metadata for the objects in the image:
- `bbox_id`: Unique numeric ID of the bounding box annotation.
- `category`: The object’s category.
- `area`: The area of the bounding box.
- `bbox`: The object’s bounding box (in the Pascal VOC format)
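Since the boxes are in the Pascal VOC convention (`[xmin, ymin, xmax, ymax]` in absolute pixels), converting them to the COCO `[x, y, width, height]` convention is a one-liner. A minimal sketch (the helper name is ours):

```python
def voc_to_coco(bbox):
    """Convert a Pascal VOC box [xmin, ymin, xmax, ymax] to COCO [x, y, w, h]."""
    xmin, ymin, xmax, ymax = bbox
    return [xmin, ymin, xmax - xmin, ymax - ymin]

# First box of the sample instance above
print(voc_to_coco([445.0, 910.0, 505.0, 983.0]))
# → [445.0, 910.0, 60.0, 73.0]
```

Note that in the sample instance the `area` values (e.g. 1422 for this box) are smaller than the box extent (60 × 73), which suggests they are derived from the annotated masks rather than the boxes themselves.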
### Data Splits
| | Train | Validation | Test |
|----------------|--------|------------|------|
| Images | 45623 | 1158 | 0 |
| Bounding boxes | 333401 | 8781 | 0 |
## Additional Information
### Licensing Information
Fashionpedia is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{jia2020fashionpedia,
title={Fashionpedia: Ontology, Segmentation, and an Attribute Localization Dataset},
  author={Jia, Menglin and Shi, Mengyun and Sirotenko, Mikhail and Cui, Yin and Cardie, Claire and Hariharan, Bharath and Adam, Hartwig and Belongie, Serge},
booktitle={European Conference on Computer Vision (ECCV)},
year={2020}
}
```
### Contributions
Thanks to [@blinjrm](https://github.com/blinjrm) for adding this dataset.
| detection-datasets/fashionpedia | [
"task_categories:object-detection",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"object-detection",
"fashion",
"computer-vision",
"arxiv:2004.12276",
"region:us"
] | 2022-09-22T09:33:24+00:00 | {"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "paperswithcode_id": "fashionpedia", "pretty_name": "Fashionpedia", "tags": ["object-detection", "fashion", "computer-vision"]} | 2022-09-22T12:22:02+00:00 |
871826e171a2cf997849318707f1a6970bc53be6 | This data set is created by randomly sampling 1M documents from [the large supervised proportional mixture](https://github.com/google-research/text-to-text-transfer-transformer/blob/733428af1c961e09ea0b7292ad9ac9e0e001f8a5/t5/data/mixtures.py#L193) from the [T5](https://github.com/google-research/text-to-text-transfer-transformer) repository.
The code to produce this sampled dataset can be found [here](https://github.com/chenyu-jiang/text-to-text-transfer-transformer/blob/main/prepare_dataset.py). | jchenyu/t5_large_supervised_proportional_1M | [
"license:apache-2.0",
"region:us"
] | 2022-09-22T10:21:39+00:00 | {"license": "apache-2.0"} | 2022-09-22T10:35:08+00:00 |
2db8cc29752777441ed3bed7ca97352171059550 |
# Dataset Card for SemCor
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://web.eecs.umich.edu/~mihalcea/downloads.html#semcor
- **Repository:**
- **Paper:** https://aclanthology.org/H93-1061/
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to
WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton
University.
Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot
be retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to
symbolize a missing sense in WordNet 3.0.
The automatic mapping was performed within the Language and Information
Technologies lab at UNT, by Rada Mihalcea ([email protected]).
THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES,
EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO
REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR
PURPOSE.
In agreement with the license from Princeton University, you are granted
permission to use, copy, modify and distribute this database
for any purpose and without fee and royalty is hereby granted, provided that you
agree to comply with the Princeton copyright notice and statements, including
the disclaimer, and that the same appear on ALL copies of the database,
including modifications that you make for internal
use or for distribution.
Both LICENSE and README files distributed with the SemCor 1.6 package are
included in the current distribution of SemCor 3.0.
### Languages
English
## Additional Information
### Licensing Information
WordNet Release 1.6 Semantic Concordance Release 1.6
This software and database is being provided to you, the LICENSEE, by
Princeton University under the following license. By obtaining, using
and/or copying this software and database, you agree that you have
read, understood, and will comply with these terms and conditions.:
Permission to use, copy, modify and distribute this software and
database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with
the following copyright notice and statements, including the disclaimer,
and that the same appear on ALL copies of the software, database and
documentation, including modifications that you make for internal
use or for distribution.
WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY
OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR
OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in
advertising or publicity pertaining to distribution of the software
and/or database. Title to copyright in this software, database and
any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.
### Citation Information
```bibtex
@inproceedings{miller-etal-1993-semantic,
title = "A Semantic Concordance",
author = "Miller, George A. and
Leacock, Claudia and
Tengi, Randee and
Bunker, Ross T.",
booktitle = "{H}uman {L}anguage {T}echnology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993",
year = "1993",
url = "https://aclanthology.org/H93-1061",
}
```
### Contributions
Thanks to [@thesofakillers](https://github.com/thesofakillers) for adding this
dataset, converting from xml to csv.
| thesofakillers/SemCor | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other",
"word sense disambiguation",
"semcor",
"wordnet",
"region:us"
] | 2022-09-22T12:31:04+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "SemCor", "tags": ["word sense disambiguation", "semcor", "wordnet"]} | 2022-10-12T07:46:28+00:00 |
63aac2cc0638acf1d69b9e1fb0a1b615da567550 |
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for sd-nlp](#dataset-card-for-sd-nlp)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email protected], [email protected]
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not tokenized, only split into words. Users can therefore use it to fine-tune other models.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (B-PANEL_START) of these segments and allows training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines.
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the variables measured and the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' geneproducts, entities used as common baseline across samples, or specify the context of the experiment (cellular system, species, etc...).
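Since all tags follow the IOB2 scheme, entity spans can be recovered generically from any tag sequence. A minimal, dataset-agnostic sketch in plain Python (the function name is ours, not part of the dataset):

```python
def iob2_spans(tags):
    """Collect (start, end, label) entity spans from an IOB2 tag sequence; end is exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:  # close any span still open
                spans.append((start, i, label))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # span continues
        else:  # "O", or an I- tag that does not continue the open span
            if start is not None:
                spans.append((start, i, label))
            start, label = None, None
    if start is not None:  # span runs to the end of the sequence
        spans.append((start, len(tags), label))
    return spans

print(iob2_spans(["O", "B-GENEPROD", "I-GENEPROD", "O", "B-CELL"]))
# → [(1, 3, 'GENEPROD'), (4, 5, 'CELL')]
```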
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ',
'labels': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 13, 14, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4, 0,
           0, 0, 0, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 0, 0, 3, 4, 4, 4,
           4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8,
           8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4,
           4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `text`: `str` of the text
- `label_ids`: a dictionary composed of lists of strings at the character level:
  - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
### Data Splits
```python
DatasetDict({
train: Dataset({
features: ['text', 'labels'],
num_rows: 66085
})
test: Dataset({
features: ['text', 'labels'],
num_rows: 8225
})
validation: Dataset({
features: ['text', 'labels'],
num_rows: 7948
})
})
```
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually with expert curators from the SourceData project (https://sourcedata.embo.org)
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org)
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
### Licensing Information
CC BY 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset. | EMBO/sd-character-level-ner | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-09-22T12:57:31+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification", "structure-prediction"], "task_ids": ["multi-class-classification", "named-entity-recognition", "parsing"]} | 2022-10-23T05:41:24+00:00 |
4a706ce4d084ae644acb17bac7fd0919e493dbeb |
# Dataset Card for Fashionpedia_4_categories
This dataset is a variation of the fashionpedia dataset available [here](https://huggingface.co/datasets/detection-datasets/fashionpedia), with 2 key differences:
- It contains only 4 categories:
- Clothing
- Shoes
- Bags
- Accessories
- New splits were created:
- Train: 90% of the images
- Val: 5%
  - Test: 5%
The goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset.
This dataset was created using the `detection_datasets` library ([GitHub](https://github.com/blinjrm/detection-datasets), [PyPI](https://pypi.org/project/detection-datasets/)), you can check here the full creation [notebook](https://blinjrm.github.io/detection-datasets/tutorials/2_Transform/).
In a nutshell, the following mapping was applied:
```Python
mapping = {
'shirt, blouse': 'clothing',
'top, t-shirt, sweatshirt': 'clothing',
'sweater': 'clothing',
'cardigan': 'clothing',
'jacket': 'clothing',
'vest': 'clothing',
'pants': 'clothing',
'shorts': 'clothing',
'skirt': 'clothing',
'coat': 'clothing',
'dress': 'clothing',
'jumpsuit': 'clothing',
'cape': 'clothing',
'glasses': 'accessories',
'hat': 'accessories',
'headband, head covering, hair accessory': 'accessories',
'tie': 'accessories',
'glove': 'accessories',
'belt': 'accessories',
'tights, stockings': 'accessories',
'sock': 'accessories',
'shoe': 'shoes',
'bag, wallet': 'bags',
'scarf': 'accessories',
}
```
As a result, annotations with no category equivalent in the mapping have been dropped. | detection-datasets/fashionpedia_4_categories | [
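The remapping and dropping behavior can be sketched in plain Python. This is a minimal illustration of the logic described above, not the `detection_datasets` implementation itself; the sample annotations and the abridged mapping below are hypothetical, for illustration only.

```python
# Abridged version of the full mapping shown above.
mapping = {
    "shirt, blouse": "clothing",
    "shoe": "shoes",
    "bag, wallet": "bags",
    "scarf": "accessories",
}

# Hypothetical raw annotations in (category, bbox) form.
raw_annotations = [
    {"category": "shirt, blouse", "bbox": [10, 20, 110, 220]},
    {"category": "shoe", "bbox": [30, 40, 80, 90]},
    {"category": "umbrella", "bbox": [5, 5, 50, 50]},  # no equivalent -> dropped
]

# Remap categories; annotations without an equivalent are dropped.
remapped = [
    {**ann, "category": mapping[ann["category"]]}
    for ann in raw_annotations
    if ann["category"] in mapping
]

print(remapped)
```

After this step, the "umbrella" annotation is gone and the remaining annotations carry one of the 4 target categories.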
"task_categories:object-detection",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:fashionpedia",
"language:en",
"license:cc-by-4.0",
"object-detection",
"fashion",
"computer-vision",
"region:us"
] | 2022-09-22T13:09:27+00:00 | {"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["fashionpedia"], "task_categories": ["object-detection"], "paperswithcode_id": "fashionpedia", "pretty_name": "Fashionpedia_4_categories", "tags": ["object-detection", "fashion", "computer-vision"]} | 2022-09-22T13:45:18+00:00 |
2e7fdae1b8a959fa70bdadea392312869a02c744 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-6f9c29-1531855204 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T13:15:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["accuracy"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-22T14:17:52+00:00 |
ad46e5b6677b9bd3aa6368c688dac0fc30d5e4ca | Large file storage for the paper `Convergent Representations of Computer Programs in Human and Artificial Neural Networks` by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, and Una-May O'Reilly. The code repository is hosted on [GitHub](https://github.com/ALFA-group/code-representations-ml-brain). Check it out!
If you use this work, please cite:
```bibtex
@inproceedings{SrikantLipkin2022,
author = {Srikant, Shashank and Lipkin, Benjamin and Ivanova, Anna and Fedorenko, Evelina and O'Reilly, Una-May},
title = {Convergent Representations of Computer Programs in Human and Artificial Neural Networks},
year = {2022},
journal = {Advances in Neural Information Processing Systems},
}
``` | benlipkin/braincode-neurips2022 | [
"license:mit",
"region:us"
] | 2022-09-22T13:17:03+00:00 | {"license": "mit"} | 2022-09-22T16:24:45+00:00 |
caba75ded0756e6f559f383b667112a74578f55e | MadhuLokanath/New_Data | [
"license:apache-2.0",
"region:us"
] | 2022-09-22T13:32:22+00:00 | {"license": "apache-2.0"} | 2022-09-22T13:32:22+00:00 |
|
9623e24bcc3da5ec8a7ab5ed6b194294d6a18358 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2
* Dataset: samsum
* Config: samsum
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-61187c-1532155205 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-22T13:42:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-22T15:40:56+00:00 |
e7367bb69fc0a14d622f29f74d51efddea95b46a | GGWON/jnstyle | [
"license:afl-3.0",
"region:us"
] | 2022-09-22T14:29:18+00:00 | {"license": "afl-3.0"} | 2022-09-22T14:29:18+00:00 |
|
8178d8c493897dc0cf759dd21413c118c0423718 |
[source](https://github.com/wangle1218/KBQA-for-Diagnosis/tree/main/nlu/bert_intent_recognition/data) | nlp-guild/intent-recognition-biomedical | [
"license:mit",
"region:us"
] | 2022-09-22T15:10:30+00:00 | {"license": "mit"} | 2022-09-22T15:13:44+00:00 |
7eecec7624c6677ce4d20471785ab36a068da321 | Azarthehulk/hand_written_dataset | [
"license:other",
"region:us"
] | 2022-09-22T15:57:28+00:00 | {"license": "other"} | 2022-09-22T15:57:28+00:00 |
|
b1ff4f0b5abaadff2684a551d01334e4b2133d59 | aseem007/sd | [
"region:us"
] | 2022-09-22T17:43:10+00:00 | {} | 2022-11-06T13:10:58+00:00 |
|
6ec16181a1c4b5ed412c979adc8a4c05d6321ce9 | Theo89/teracotta | [
"license:artistic-2.0",
"region:us"
] | 2022-09-22T17:51:03+00:00 | {"license": "artistic-2.0"} | 2022-09-22T17:55:36+00:00 |