sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
a5546b26a14869e8be1edca41bf1636f178984c0 | # Dataset Card for "celeb-identities"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jafdxc/celeb-identities | [
"region:us"
] | 2022-10-22T13:43:56+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "clarkson", "1": "freeman", "2": "jackie_chan", "3": "jennifer", "4": "serena"}}}}], "splits": [{"name": "train", "num_bytes": 1305982.0, "num_examples": 13}], "download_size": 1306199, "dataset_size": 1305982.0}} | 2022-10-22T13:44:10+00:00 |
9ee884078ecc63402fdbf63b023b5001280abadd |
## ESC benchmark diagnostic dataset
## Dataset Summary
As a part of ESC benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.
All eight datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
esc_diagnostic_ami = load_dataset("esc-benchmark/esc-diagnostic-dataset", "ami")
```
Each dataset has two splits - `clean` and `other`. To obtain the clean diagnostic subset of AMI:
```python
ami_diagnostic_clean = esc_diagnostic_ami["clean"]
```
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(ami_diagnostic_clean[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information on the dataset from which the sample derives and a unique identifier:
```python
{
'audio': {'path': None,
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical',
'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical',
'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005',
'dataset': 'ami',
}
```
### Data Fields
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `ortho_transcript`: the orthographic transcription of the audio file.
- `norm_transcript`: the normalized transcription of the audio file.
- `id`: unique id of the data sample.
- `dataset`: string name of the dataset the sample belongs to.
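As a quick illustration of these fields, the following minimal sketch (assuming the `ami_diagnostic_clean` split loaded above) prints the two transcription formats side by side:
```python
# Compare orthographic and normalised transcriptions for a few samples.
for sample in ami_diagnostic_clean.select(range(3)):
    print(sample["id"], "(", sample["dataset"], ")")
    print("  ortho:", sample["ortho_transcript"])
    print("  norm: ", sample["norm_transcript"])
```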
### Data Preparation
#### Audio
The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
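To make the recommended access pattern concrete, here is a minimal sketch (again assuming the `ami_diagnostic_clean` split from above) that indexes the sample before touching the `"audio"` column:
```python
# Preferred: index the sample first, so only this one audio file is decoded.
sample = ami_diagnostic_clean[0]
audio = sample["audio"]
print(audio["sampling_rate"], audio["array"].shape)

# Avoid: ami_diagnostic_clean["audio"][0] decodes every audio file in the split first.
```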
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
### Access
All eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESC contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESC validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESC dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esc-bench/esc-diagnostic-dataset](https://huggingface.co/datasets/esc-bench/esc-diagnostic-datasets).
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
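If only one of the splits listed above is needed, it can presumably be requested directly at load time; the following is a minimal sketch assuming the split names are exposed as shown:
```python
librispeech_val_clean = load_dataset(
    "esc-benchmark/esc-datasets", "librispeech", split="validation.clean"
)
```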
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities, accents, and recording conditions. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esc-benchmark/esc-datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esc-benchmark/esc-datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esc-benchmark/esc-datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esc-benchmark/esc-datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` | esc-bench/esc-diagnostic-dataset | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:en",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"license:other",
"asr",
"benchmark",
"speech",
"esc",
"region:us"
] | 2022-10-22T13:47:33+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ESC Diagnostic Dataset", "tags": ["asr", "benchmark", "speech", "esc"], "extra_gated_prompt": "Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}} | 2022-10-25T11:34:26+00:00 |
9519590fcb7633171ee619a480fca47a6947bdbf | Nestor95/ME | [
"license:openrail",
"region:us"
] | 2022-10-22T14:16:25+00:00 | {"license": "openrail"} | 2022-12-01T10:55:00+00:00 |
|
a8adb09a4c77f4a9de9ec8bd0c55fcc589ba6464 |
[XSS](�javascript:alert(document.domain))
| orgbug/test | [
"license:apache-2.0",
"region:us"
] | 2022-10-22T15:19:48+00:00 | {"license": "apache-2.0"} | 2023-05-06T13:51:10+00:00 |
b5aa6b7cef550455223dc0e4faacaf8e5621447e | # not demo
alright
## Subheader
This is so cool!
| tiagoseca/raw_dre_corpus | [
"region:us"
] | 2022-10-22T17:53:42+00:00 | {} | 2022-11-02T12:37:09+00:00 |
1db722b3b2cdac83d6d8af9439a366e745964015 | aimagic/big5essay | [
"license:mit",
"region:us"
] | 2022-10-22T18:58:04+00:00 | {"license": "mit"} | 2022-10-22T18:59:47+00:00 |
|
328ac75de85373f41365238b2c9cdf1163c4945c | # Dataset Card for "lyrics_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nick-carroll1/lyrics_dataset | [
"region:us"
] | 2022-10-22T18:59:04+00:00 | {"dataset_info": {"features": [{"name": "Artist", "dtype": "string"}, {"name": "Song", "dtype": "string"}, {"name": "Lyrics", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 371464, "num_examples": 237}], "download_size": 166829, "dataset_size": 371464}} | 2022-10-23T16:56:11+00:00 |
c019b34c131cb6c4b5694f910961f72f6f147ba9 |
# Dataset Card for friends_data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Friends dataset consists of speech-based dialogue from the Friends TV sitcom. It is extracted from the [SocialNLP EmotionX 2019 challenge](https://sites.google.com/view/emotionx2019/datasets).
### Supported Tasks and Leaderboards
text-classification, sentiment-classification: The dataset is mainly used to predict a sentiment label given text input.
### Languages
The utterances are in English.
## Dataset Structure
### Data Instances
A data point containing text and the corresponding label.
An example from the friends_dataset looks like this:
```
{
  'text': 'Well! Well! Well! Joey Tribbiani! So you came back huh?',
  'label': 'surprise'
}
```
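A minimal loading sketch, assuming the dataset is hosted under this repository id and exposes a default `train` split:
```python
from datasets import load_dataset

friends = load_dataset("michellejieli/friends_dataset", split="train")
print(friends[0]["text"], "->", friends[0]["label"])
```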
### Data Fields
Each instance includes a `text` field and a corresponding emotion `label`.
## Dataset Creation
### Curation Rationale
The dataset contains 1000 English-language dialogues originally stored in JSON files. Each JSON file contains an array of dialogue objects; each dialogue object is an array of line objects, and each line object contains speaker, utterance, emotion, and annotation strings:
```
{
  "speaker": "Chandler",
  "utterance": "My duties? All right.",
  "emotion": "surprise",
  "annotation": "2000030"
}
```
Utterance and emotion were extracted from the original files into a CSV file. The dataset was cleaned to remove non-neutral labels. This dataset was created to be used in fine-tuning an emotion sentiment classifier that can be useful to teach individuals with autism how to read facial expressions. | michellejieli/friends_dataset | [
"language:en",
"distilroberta",
"sentiment",
"emotion",
"twitter",
"reddit",
"region:us"
] | 2022-10-22T19:37:03+00:00 | {"language": "en", "tags": ["distilroberta", "sentiment", "emotion", "twitter", "reddit"]} | 2022-10-23T12:21:12+00:00 |
21f36f5e43fe7e7978326e4bc8f481b88266f1da | TomTBT/pmc_open_access_figure | [
"license:apache-2.0",
"region:us"
] | 2022-10-22T21:05:42+00:00 | {"license": "apache-2.0"} | 2024-01-11T22:06:36+00:00 |
|
f8d7080403cdd436f76256cfa60ca1ae64c8617d |
# Dataset Card for ComplexWebQuestions
## Dataset Description
- **Homepage:** https://www.tau-nlp.sites.tau.ac.il/compwebq
- **Repository:** https://github.com/alontalmor/WebAsKB
- **Paper:** https://arxiv.org/abs/1803.06643
- **Leaderboard:** https://www.tau-nlp.sites.tau.ac.il/compwebq-leaderboard
- **Point of Contact:** [email protected].
### Dataset Summary
**A dataset for answering complex questions that require reasoning over multiple web snippets**
ComplexWebQuestions is a new dataset that contains a large set of complex questions in natural language, and can be used in multiple ways:
- By interacting with a search engine, which is the focus of our paper (Talmor and Berant, 2018);
- As a reading comprehension task: we release 12,725,989 web snippets that are relevant for the questions, and were collected during the development of our model;
- As a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
### Question Files
The dataset contains 34,689 examples, divided into 27,734 train, 3,480 dev, and 3,475 test examples, each containing:
```
"ID”: The unique ID of the example;
"webqsp_ID": The original WebQuestionsSP ID from which the question was constructed;
"webqsp_question": The WebQuestionsSP Question from which the question was constructed;
"machine_question": The artificial complex question, before paraphrasing;
"question": The natural language complex question;
"sparql": Freebase SPARQL query for the question. Note that the SPARQL was constructed for the machine question, the actual question after paraphrasing
may differ from the SPARQL.
"compositionality_type": An estimation of the type of compositionally. {composition, conjunction, comparative, superlative}. The estimation has not been manually verified,
the question after paraphrasing may differ from this estimation.
"answers": a list of answers each containing answer: the actual answer; answer_id: the Freebase answer id; aliases: freebase extracted aliases for the answer.
"created": creation time
```
NOTE: the test set does not contain the "answers" field. For test evaluation, please send an email to [email protected].
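To inspect these fields, the dataset can presumably be loaded through the Hugging Face Hub; this is a minimal sketch assuming this repository id and a `train` split exposing the fields listed above:
```python
from datasets import load_dataset

cwq = load_dataset("drt/complex_web_questions", split="train")
example = cwq[0]
print(example["question"])
print(example["sparql"])
```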
### Web Snippet Files
PLEASE DON'T USE CHROME WHEN DOWNLOADING THESE FROM DROPBOX (THE UNZIP COULD FAIL).
The snippet files consist of 12,725,989 snippets, each containing:
```
"question_ID": The ID of the related question, with at least 3 instances of the same ID (full question, split1, split2);
"question": The natural language complex question;
"web_query": Query sent to the search engine.
"split_source": 'noisy supervision split' or 'ptrnet split'; please train on examples containing "ptrnet split" when comparing to Split+Decomp from https://arxiv.org/abs/1807.09623
"split_type": 'full_question' or 'split_part1' or 'split_part2'; please use 'composition_answer' in questions of type composition with split_type "split_part1" when training a reading comprehension model on splits as in Split+Decomp from https://arxiv.org/abs/1807.09623 (in the rest of the cases use the original answer).
"web_snippets": ~100 web snippets per query. Each snippet includes Title, Snippet. They are ordered according to Google results.
```
In total there are:
- 10,035,571 training set snippets
- 1,350,950 dev set snippets
- 1,339,468 test set snippets
### Source Data
The original files can be found at this [dropbox link](https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AACuu4v3YNkhirzBOeeaHYala)
### Licensing Information
Not specified
### Citation Information
```
@inproceedings{talmor2018web,
title={The Web as a Knowledge-Base for Answering Complex Questions},
author={Talmor, Alon and Berant, Jonathan},
booktitle={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
pages={641--651},
year={2018}
}
```
### Contributions
Thanks to [happen2me](https://github.com/happen2me) for contributing this dataset. | drt/complex_web_questions | [
"license:apache-2.0",
"arxiv:1803.06643",
"arxiv:1807.09623",
"region:us"
] | 2022-10-22T21:14:27+00:00 | {"license": "apache-2.0", "source": "https://github.com/KGQA/KGQA-datasets"} | 2023-04-27T20:04:50+00:00 |
1ede12140e260ae57927006045ec50e7fdf4da4b | Roderich/Elsa_prueba | [
"license:other",
"region:us"
] | 2022-10-22T21:22:49+00:00 | {"license": "other"} | 2022-10-22T21:25:31+00:00 |
|
a1ca710081a0cf551d68e8fa2e58cb24016bce11 | Escalibur/realSergio | [
"license:unknown",
"region:us"
] | 2022-10-22T21:36:47+00:00 | {"license": "unknown"} | 2022-10-22T21:37:26+00:00 |
|
fa56884038f5566930d101134cb74fc8912a92ee | # Dataset Card for "processed_narrative_relationship_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mfigurski80/processed_narrative_relationship_dataset | [
"region:us"
] | 2022-10-22T23:10:43+00:00 | {"dataset_info": {"features": [{"name": "subject", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "dialogue", "dtype": "string"}, {"name": "pair_examples", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 3410751.179531327, "num_examples": 15798}, {"name": "train", "num_bytes": 13642788.820468673, "num_examples": 63191}], "download_size": 9671733, "dataset_size": 17053540.0}} | 2022-11-01T01:00:16+00:00 |
d319bf747b93c448a96b0d00a2a485a336ee05e7 | nateraw/misc | [
"license:mit",
"region:us"
] | 2022-10-23T00:11:46+00:00 | {"license": "mit"} | 2024-01-15T21:28:50+00:00 |
|
a6895a95b21e1c435a01b40c6be3d7280a727f07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Aiyshwariya/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563161 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T01:39:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Aiyshwariya/bert-finetuned-squad", "metrics": ["squad", "bertscore"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-23T01:42:33+00:00 |
e8e49851544cde36cf86caec6e1e653e4cb56d42 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Neulvo/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563162 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T01:39:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Neulvo/bert-finetuned-squad", "metrics": ["squad", "bertscore"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-23T01:42:25+00:00 |
5da30b83882e79083ee59bd450c0ada0300a59d6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563163 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T01:39:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": ["squad", "bertscore"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-10-23T01:42:15+00:00 |
2a3bb2e3c5547c512306534192af06db7dc5d43b | kayt/finetuning | [
"region:us"
] | 2022-10-23T04:19:38+00:00 | {} | 2022-10-23T04:26:18+00:00 |
|
946b87cd3fe02ce0c8827b865d0f3a0340f8066a | huabin/momo | [
"license:c-uda",
"region:us"
] | 2022-10-23T04:41:31+00:00 | {"license": "c-uda"} | 2022-10-23T05:01:57+00:00 |
|
6d4e61b584aec1e2d29f95d05baa037b84f23825 | fourteenBDr/shiji | [
"license:apache-2.0",
"region:us"
] | 2022-10-23T08:04:10+00:00 | {"license": "apache-2.0"} | 2022-10-23T09:33:10+00:00 |
|
b825fdf740a1d6820c02e06a1d8741005f858612 | BridgeQZH/amagazine | [
"license:openrail",
"region:us"
] | 2022-10-23T09:55:03+00:00 | {"license": "openrail"} | 2022-10-29T19:56:57+00:00 |
|
567094ef0bf698519f811edb7bef6b629ec1beed | P22/beta-flower | [
"license:afl-3.0",
"region:us"
] | 2022-10-23T10:57:36+00:00 | {"license": "afl-3.0"} | 2022-10-23T10:58:59+00:00 |
|
d71cefadbbee8cdb4a2b09e9783de79ba3da242b | Rosenberg/genia | [
"license:mit",
"region:us"
] | 2022-10-23T11:07:06+00:00 | {"license": "mit"} | 2022-10-23T11:08:03+00:00 |
|
884b7444f79ed8f90b22ab80ee2469eb65b697cf |
# Dataset Card for VUA Metaphor Corpus
**Important note #1**: This is a slightly simplified but mostly complete parse of the corpus. What is missing are the lemmas and some metadata that were not important at the time of writing the parser. See the section `Simplifications` for more information on this.
**Important note #2**: The dataset contains metadata - to ignore it and correctly remap the annotations, see the section `Discarding metadata`.
### Dataset Summary
VUA Metaphor Corpus (VUAMC) contains a selection of excerpts from BNC-Baby files that have been annotated for metaphor. There are four registers, each comprising about 50 000 words: academic texts, news texts, fiction, and conversations.
Words have been separately labelled as participating in multi-word expressions (about 1.5%) or as discarded for metaphor analysis (0.02%). Main categories include words that are related to metaphor (MRW), words that signal metaphor (MFlag), and words that are not related to metaphor. For metaphor-related words, subdivisions have been made between clear cases of metaphor versus borderline cases (WIDLII, When In Doubt, Leave It In). Another parameter of metaphor-related words makes a distinction between direct metaphor, indirect metaphor, and implicit metaphor.
### Supported Tasks and Leaderboards
Metaphor detection, metaphor type classification.
### Languages
English.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'document_name': 'kcv-fragment42',
'words': ['', 'I', 'think', 'we', 'should', 'have', 'different', 'holidays', '.'],
'pos_tags': ['N/A', 'PNP', 'VVB', 'PNP', 'VM0', 'VHI', 'AJ0', 'NN2', 'PUN'],
'met_type': [
{'type': 'mrw/met', 'word_indices': [5]}
],
'meta': ['vocal/laugh', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A', 'N/A']
}
```
### Data Fields
The instances are ordered as they appear in the corpus.
- `document_name`: a string containing the name of the document in which the sentence appears;
- `words`: words in the sentence (`""` when the word represents metadata);
- `pos_tags`: POS tags of the words, encoded using the BNC basic tagset (`"N/A"` when the word does not have an associated POS tag);
- `met_type`: metaphors in the sentence, marked by their type and word indices;
- `meta`: selected metadata tags providing additional context to the sentence. Metadata may not correspond to a specific word. In this case, the metadata is represented with an empty string (`""`) in `words` and a `"N/A"` tag in `pos_tags`.
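To show how these fields fit together, the sketch below resolves the `met_type` word indices of the sample instance above back to surface words (the instance is hard-coded here for illustration):
```python
instance = {
    "words": ["", "I", "think", "we", "should", "have", "different", "holidays", "."],
    "met_type": [{"type": "mrw/met", "word_indices": [5]}],
}

for met in instance["met_type"]:
    marked = [instance["words"][i] for i in met["word_indices"]]
    print(met["type"], "->", marked)  # mrw/met -> ['have']
```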
## Dataset Creation
For detailed information on the corpus, please check out the references in the `Citation Information` section or contact the dataset authors.
## Simplifications
The raw corpus is equipped with rich metadata and encoded in the TEI XML format. The textual part is fully parsed except for the lemmas, i.e. all the sentences in the raw corpus are present in the dataset.
However, parsing the metadata fully is unnecessarily tedious, so certain simplifications were made:
- paragraph information is not preserved as the dataset is parsed at sentence level;
- manual corrections (`<corr>`) of incorrectly written words are ignored, and the original, incorrect form of the words is used instead;
- `<ptr>` and `<anchor>` tags are ignored as I cannot figure out what they represent;
- the attributes `rendition` (in `<hi>` tags) and `new` (in `<shift>` tags) are not exposed.
## Discarding metadata
The dataset contains rich metadata, which is stored in the `meta` attribute. To keep data aligned, empty words or `"N/A"`s are inserted into the other attributes. If you want to ignore the metadata and correct the metaphor type annotations, you can use code similar to the following snippet:
```python3
import datasets

data = datasets.load_dataset("matejklemen/vuamc")["train"]
data = data.to_pandas()

for idx_ex in range(data.shape[0]):
    curr_ex = data.iloc[idx_ex]

    # Map indices of non-metadata words to their position after metadata removal.
    idx_remap = {}
    for idx_word, word in enumerate(curr_ex["words"]):
        if len(word) != 0:
            idx_remap[idx_word] = len(idx_remap)

    # Note that lists are stored as np arrays by datasets, while we are storing new data in a list!
    # (unhandled for simplicity)
    words, pos_tags, met_type = curr_ex[["words", "pos_tags", "met_type"]].tolist()
    if len(idx_remap) != len(curr_ex["words"]):
        words = list(filter(lambda _word: len(_word) > 0, curr_ex["words"]))
        pos_tags = list(filter(lambda _pos: _pos != "N/A", curr_ex["pos_tags"]))
        met_type = []
        for met_info in curr_ex["met_type"]:
            met_type.append({
                "type": met_info["type"],
                "word_indices": list(map(lambda _i: idx_remap[_i], met_info["word_indices"]))
            })
```
## Additional Information
### Dataset Curators
Gerard Steen; et al. (please see http://hdl.handle.net/20.500.12024/2541 for the full list).
### Licensing Information
Available for non-commercial use on condition that the terms of the [BNC Licence](http://www.natcorp.ox.ac.uk/docs/licence.html) are observed and that this header is included in its entirety with any copy distributed.
### Citation Information
```
@book{steen2010method,
title={A method for linguistic metaphor identification: From MIP to MIPVU},
author={Steen, Gerard and Dorst, Lettie and Herrmann, J. and Kaal, Anna and Krennmayr, Tina and Pasma, Trijntje},
volume={14},
year={2010},
publisher={John Benjamins Publishing}
}
```
```
@inproceedings{leong-etal-2020-report,
title = "A Report on the 2020 {VUA} and {TOEFL} Metaphor Detection Shared Task",
author = "Leong, Chee Wee (Ben) and
Beigman Klebanov, Beata and
Hamill, Chris and
Stemle, Egon and
Ubale, Rutuja and
Chen, Xianyang",
booktitle = "Proceedings of the Second Workshop on Figurative Language Processing",
year = "2020",
url = "https://aclanthology.org/2020.figlang-1.3",
doi = "10.18653/v1/2020.figlang-1.3",
pages = "18--29"
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
| matejklemen/vuamc | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"language:en",
"license:other",
"metaphor-classification",
"multiword-expression-detection",
"vua20",
"vua18",
"mipvu",
"region:us"
] | 2022-10-23T11:13:08+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "100K<n<1M"], "source_datasets": [], "task_categories": ["text-classification", "token-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "VUA Metaphor Corpus", "tags": ["metaphor-classification", "multiword-expression-detection", "vua20", "vua18", "mipvu"]} | 2022-10-26T07:50:42+00:00 |
681708c46bb571d716afbc9501c1fbd96c530ab6 | Rosenberg/conll2003 | [
"license:mit",
"region:us"
] | 2022-10-23T11:40:22+00:00 | {"license": "mit"} | 2022-10-23T11:41:04+00:00 |
|
0159c148e6fbd59f3a162659dc69edf3758990a1 | Rosenberg/weibo_ner | [
"license:mit",
"region:us"
] | 2022-10-23T11:41:57+00:00 | {"license": "mit"} | 2022-10-25T11:29:55+00:00 |
|
be332aad11c0eef08bfc2b33152db0cdbc5d0279 | Rosenberg/CMeEE-V2 | [
"license:mit",
"region:us"
] | 2022-10-23T11:45:10+00:00 | {"license": "mit"} | 2022-10-23T11:59:54+00:00 |
|
0525152d16dae786e45826176cdbdddb38b014e1 | Rosenberg/fewnerd | [
"license:mit",
"region:us"
] | 2022-10-23T11:47:42+00:00 | {"license": "mit"} | 2022-10-23T11:50:58+00:00 |
|
37e794182444d026f446e74817542966acff4820 | Rosenberg/zhmsra | [
"license:mit",
"region:us"
] | 2022-10-23T11:53:05+00:00 | {"license": "mit"} | 2022-10-23T11:54:03+00:00 |
|
eeabc326f3e66f46a953eeafd74803f202abb477 | Rosenberg/IMCS-NER | [
"license:mit",
"region:us"
] | 2022-10-23T11:54:35+00:00 | {"license": "mit"} | 2022-10-23T11:58:29+00:00 |
|
465c9faaaccf6e94e255d43a697e84934e4a12c1 | Rosenberg/IMCS-V2-NER | [
"license:mit",
"region:us"
] | 2022-10-23T11:58:45+00:00 | {"license": "mit"} | 2022-10-23T11:59:16+00:00 |
|
92a3261f9e48952f61dcb2b8e8e2458d0f6e39ba | Rosenberg/CHIP-CDEE | [
"license:mit",
"region:us"
] | 2022-10-23T12:01:15+00:00 | {"license": "mit"} | 2022-10-23T12:01:29+00:00 |
|
fda10a03481b22937330ef0283637b830d74a4df | Rosenberg/CMeIE | [
"license:mit",
"region:us"
] | 2022-10-23T12:01:49+00:00 | {"license": "mit"} | 2022-10-23T12:02:07+00:00 |
|
ec32b2edfa4d31849d4f02aa301b880f45fc23e5 | Rosenberg/nyt_star | [
"license:mit",
"region:us"
] | 2022-10-23T12:06:51+00:00 | {"license": "mit"} | 2022-10-23T12:07:40+00:00 |
|
da5cc75c8b9eccfa77a5eca281ab7f960b8438de | Rosenberg/webnlg | [
"license:mit",
"region:us"
] | 2022-10-23T12:08:25+00:00 | {"license": "mit"} | 2022-10-23T12:08:47+00:00 |
|
c187dbf4487b85d586cba7e9d39a4b7860f08ffc | Rosenberg/webnlg_star | [
"license:mit",
"region:us"
] | 2022-10-23T12:09:08+00:00 | {"license": "mit"} | 2022-10-23T12:09:22+00:00 |
|
f41e7a9ef4b6efb3b0593771ffa80b8fb7851a2c | gisbornetv/teseting | [
"license:afl-3.0",
"region:us"
] | 2022-10-23T15:06:04+00:00 | {"license": "afl-3.0"} | 2022-10-23T15:06:04+00:00 |
|
653ad516164c7f80662f71fded1c3c6c5d37c13a | ArteChile/footos | [
"license:artistic-2.0",
"region:us"
] | 2022-10-23T16:31:50+00:00 | {"license": "artistic-2.0"} | 2022-10-23T16:38:01+00:00 |
|
f521d71ad8871bfe07d1b7f809c38ed578d79f93 |
# Space Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by space_style"```
If it is too strong, just add [] around it.
Trained until 15000 steps.
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder.
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/flz5Oxz.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/5btpoXs.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/PtySCd4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NbSue9H.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/QhjRezm.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
| Nerfgun3/space_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-23T17:10:11+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-24T18:39:57+00:00 |
131a50f9074579632723246dc3a15b42323852b1 | mozay22/heart_disease | [
"license:other",
"region:us"
] | 2022-10-23T17:18:47+00:00 | {"license": "other"} | 2022-11-15T13:10:27+00:00 |
|
e75c0ba2a7b8754214c22b71ed4ab002e518d665 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset was created to be used for MLM and TSDAE training.
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v1 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-10-23T18:59:44+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-10-24T12:16:43+00:00 |
7a108fbda32cda49a9a25ae914817723b0934e36 | jeffdshen/redefine_math2_8shot | [
"license:cc-by-2.0",
"region:us"
] | 2022-10-23T19:14:00+00:00 | {"license": "cc-by-2.0"} | 2022-10-23T19:15:28+00:00 |
|
fa3b315810609649398e22125a46364aae950dce | jeffdshen/redefine_math0_8shot | [
"license:cc-by-2.0",
"region:us"
] | 2022-10-23T19:16:12+00:00 | {"license": "cc-by-2.0"} | 2022-10-23T19:17:15+00:00 |
|
d479875e3aa40d524f67059a1d8ed5d56b6141a6 | jeffdshen/neqa0_8shot | [
"license:cc-by-2.0",
"region:us"
] | 2022-10-23T19:17:37+00:00 | {"license": "cc-by-2.0"} | 2022-10-23T19:18:00+00:00 |
|
15de2e240c01577b58f949d06d419f18bfcd1563 | jeffdshen/neqa2_8shot | [
"license:cc-by-2.0",
"region:us"
] | 2022-10-23T19:19:15+00:00 | {"license": "cc-by-2.0"} | 2022-10-23T19:19:39+00:00 |
|
c32097a5d2fbede13730983eb51d2b5defc2df72 |
# Flower Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/flower_style/resolve/main/flower_style_showcase.jpg"/>
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by flower_style"```
If it is too strong, just add [] around it.
Trained until 15000 steps.
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 15k-step version in your folder.
Have fun :)
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/flower_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
] | 2022-10-23T19:34:36+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/flower_style/resolve/main/flower_style_showcase.jpg", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2022-11-17T13:54:16+00:00 |
bbbeda405dd254bbc39be64fd07ca56e9c42722a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963397 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T01:24:00+00:00 |
628102b7e82b9a387a255a6e51170e64a7674645 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963393 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:17:44+00:00 |
165ecd1b7528c0a28047f431599ec63ccc225ba5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063400 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:05:23+00:00 |
e2501deb7ee46551f0d545d7cc9d08c205bddd94 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963391 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:04:17+00:00 |
386f0520a81bc2e006e403d88b0e58a25b7edceb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963392 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:07:19+00:00 |
5493c393d6b927541a9bb351bfe46ce48a363ad2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963394 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:31:31+00:00 |
378354c50946fbf08d8a6563e5da4f69b05f57e1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963395 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:16:35+00:00 |
3beafd757977584c5a7b0426b2025d14a12b872d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963396 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:56:59+00:00 |
37caa5b64dbc5c3649fb79afa9d8ac337cacf4df | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063399 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:02:53+00:00 |
0d4bc186a5d5a1dc46d0e0206ed53c204f882a88 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063402 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:19:17+00:00 |
b8e140cc5b8866a23c246f84785adce295792c8f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063401 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:14:09+00:00 |
728799a60277cd443045c7d19c40d4191162e20e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063403 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T19:59:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:45:29+00:00 |
8772c16f195f7f98be77d04eee7b64f965607ffd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa0_8shot
* Config: jeffdshen--neqa0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa0_8shot-jeffdshen__neqa0_8shot-5a61bc-1852963398 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:00:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa0_8shot", "dataset_config": "jeffdshen--neqa0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T07:46:56+00:00 |
ed6362992ac70b04bf6de9b9707127ed9a81913b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063404 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:00:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:21:29+00:00 |
451b95597d5a98802f91f65acc9185402c4456ef | rhe-rhf/dataset | [
"license:openrail",
"region:us"
] | 2022-10-23T20:00:14+00:00 | {"license": "openrail"} | 2022-10-23T20:00:14+00:00 |
|
065a794edae01a21ecc4da42eba9271432d2c9de | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063405 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:00:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T23:35:42+00:00 |
894d51ef8e444360826fef970442b4b6e882ff64 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063406 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:09:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T03:31:40+00:00 |
1acb7b8cd33ab32069f18e4b3bda902ee86cd7b1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163407 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:10:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:13:41+00:00 |
c5c85b748f0add69a515584101f75d31a23c3eec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163408 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:11:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:17:23+00:00 |
d1b0e19328570ff6d6b66feb6f1f1d49cc2586a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163409 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:13:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:23:27+00:00 |
29878dfab55f73640bd769dda9097009ba88cac7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163411 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:20:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:54:20+00:00 |
3d4e995498c994515671fe0ffa35466db46aa819 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163410 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:20:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:36:03+00:00 |
9774214c388611978defa2b05f2cbb6eafc83ef6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163412 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:23:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:27:37+00:00 |
2d685476ba41df49df84ce83869ec97f2c48a09d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163413 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:23:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T23:15:46+00:00 |
003cddb5c422851a1ed82a771e069487afd0dbe5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163414 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:25:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T02:25:10+00:00 |
80806d78f92ead5ac7d7b71e0aad69d63da69144 | # Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for masked language modelling (MLM) and TSDAE training.
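As a rough sketch of how the sentences could be pulled in for such pre-training, the snippet below simply loads and inspects the corpus; the split name `train` is an assumption, and the column layout is not documented in this card, so the record is printed whole rather than by field:

```python
from datasets import load_dataset

# Repository identifier taken from this card; the "train" split name is assumed.
sentences = load_dataset("rufimelo/PortugueseLegalSentences-v0", split="train")

# Inspect one record before feeding the text into an MLM or TSDAE pipeline.
print(sentences[0])
```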
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v0 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-10-23T20:27:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-10-23T23:55:55+00:00 |
71a7df4dec587db7ca75e77e17820f934b9239ee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263415 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:29:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:32:52+00:00 |
8708ce52df013e02ce64fa1d724dd9658fbe0337 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263417 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:39:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:55:09+00:00 |
c79968e3486c761ac1dc22e70ef3543566a865d8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263416 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:39:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:45:45+00:00 |
21d6d506cd6554ed5d501ecf3ff9057e3cee19ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263418 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:43:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:09:30+00:00 |
45863e98e30abf429c3674f303b30e6b12a96c49 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263420 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:51:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T22:26:46+00:00 |
9afc868b3ca6999fce836cdddbf46b9a034dcb9a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263419 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:51:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:49:05+00:00 |
5b2acfeeae4274be62c8f9a05acea1b1b33b63b8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263421 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:00:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T01:54:44+00:00 |
f87ed8be2923f9a467f70386ba48da3cab41992f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263422 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:01:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T05:32:10+00:00 |
afaaca07fb88eeecf10689a1b9c35b2a143dd599 | # Dataset Card for "malicious_urls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joshtobin/malicious_urls | [
"region:us"
] | 2022-10-23T22:02:35+00:00 | {"dataset_info": {"features": [{"name": "url_len", "dtype": "int64"}, {"name": "abnormal_url", "dtype": "int64"}, {"name": "https", "dtype": "int64"}, {"name": "digits", "dtype": "int64"}, {"name": "letters", "dtype": "int64"}, {"name": "shortening_service", "dtype": "int64"}, {"name": "ip_address", "dtype": "int64"}, {"name": "@", "dtype": "int64"}, {"name": "?", "dtype": "int64"}, {"name": "-", "dtype": "int64"}, {"name": "=", "dtype": "int64"}, {"name": ".", "dtype": "int64"}, {"name": "#", "dtype": "int64"}, {"name": "%", "dtype": "int64"}, {"name": "+", "dtype": "int64"}, {"name": "$", "dtype": "int64"}, {"name": "!", "dtype": "int64"}, {"name": "*", "dtype": "int64"}, {"name": ",", "dtype": "int64"}, {"name": "//", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 32000, "num_examples": 200}], "download_size": 9837, "dataset_size": 32000}} | 2022-10-23T22:28:01+00:00 |
44a2b42b814f780978d8361080fd108504ad31b2 | salascorp/prueba2 | [
"region:us"
] | 2022-10-23T22:14:51+00:00 | {} | 2022-10-23T22:15:03+00:00 |
|
4b2859096f19a75f613a7a63183a9fadaa48ba3f |
# Dataset Card for Pokémon BLIP captions with English and Chinese.
Dataset used to train a Pokémon text-to-image model, adding a Chinese caption column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). The original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row, the dataset contains `image`, `en_text` (caption in English) and `zh_text` (caption in Chinese) keys. `image` is a varying-size PIL JPEG, and the text fields hold the accompanying captions. Only a train split is provided.
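A minimal loading sketch (field names follow the description above, and only the `train` split exists):

```python
from datasets import load_dataset

# Load the single train split of the bilingual caption dataset.
pokemon = load_dataset("svjack/pokemon-blip-captions-en-zh", split="train")

sample = pokemon[0]
print(sample["en_text"])  # English caption
print(sample["zh_text"])  # Chinese caption
image = sample["image"]   # varying-size PIL image
```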
The Chinese captions are translated by [Deepl](https://www.deepl.com/translator) | svjack/pokemon-blip-captions-en-zh | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"language:zh",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-24T00:59:52+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en", "zh"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": []} | 2022-10-31T06:23:03+00:00 |
516ffa2561b51edf85c47b390162cbfc5a117710 | JesusMaginge/modelo.de.entrenamiento | [
"license:openrail",
"region:us"
] | 2022-10-24T01:01:45+00:00 | {"license": "openrail"} | 2022-10-24T01:04:28+00:00 |
|
8675196154344395b65903c074a56404326f0945 | ionghin/digimon-blip-captions | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-24T01:31:05+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-10-24T01:31:17+00:00 |
|
aae7557e27746477eb8c0ddb5af04f104edd5f87 | jaimebw/test | [
"license:mit",
"region:us"
] | 2022-10-24T02:42:18+00:00 | {"license": "mit"} | 2022-10-24T02:42:18+00:00 |
|
9abc51ee7903424ffb971297608aa6d3d0de3bfa | declare-lab/MELD | [
"license:gpl-3.0",
"region:us"
] | 2022-10-24T03:09:55+00:00 | {"license": "gpl-3.0"} | 2022-10-24T03:48:06+00:00 |
|
4db7ef36ef8e11123394ee61198a79ad5f09b87a | SDbiaseval/embeddings | [
"license:apache-2.0",
"region:us"
] | 2022-10-24T05:09:54+00:00 | {"license": "apache-2.0"} | 2022-12-20T20:37:50+00:00 |
|
933c432110089d30a0db7225598f9977e0055de4 | dustflover/rebecca | [
"license:unknown",
"region:us"
] | 2022-10-24T05:24:28+00:00 | {"license": "unknown"} | 2022-10-24T23:29:13+00:00 |
|
da73ea4e703a8eef8b4b6172a2a258a28079851a | Damitrius/Tester | [
"license:unknown",
"region:us"
] | 2022-10-24T06:17:44+00:00 | {"license": "unknown"} | 2022-10-24T06:17:44+00:00 |
|
9b1bd372799bcb31783210c1ec8f93ff45db4d7c | paraphraser/first_data | [
"license:other",
"region:us"
] | 2022-10-24T07:12:57+00:00 | {"license": "other"} | 2022-10-24T07:12:57+00:00 |
|
08ef5a71e9a1381eb205610dda214a5b01e3e55a | # Dataset Card for "speechocean762_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762_train | [
"region:us"
] | 2022-10-24T07:57:13+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 290407029.0, "num_examples": 2500}], "download_size": 316008757, "dataset_size": 290407029.0}} | 2022-10-24T07:58:04+00:00 |
7d9d2774a2abed6351ffaddbee0fdb34d7196457 |
# Dataset Card for InfantBooks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://www.mpi-inf.mpg.de/children-texts-for-commonsense](https://www.mpi-inf.mpg.de/children-texts-for-commonsense)
- **Paper:** Do Children Texts Hold The Key To Commonsense Knowledge?
### Dataset Summary
A dataset of infants/children's books.
### Languages
All the books are in English.
## Dataset Structure
### Data Instances
malis-friend_BookDash-FKB.txt,"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself..."
### Data Fields
- title: The title of the book
- content: The content of the book
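A minimal loading sketch, assuming the corpus is exposed as a single `train` split with the two fields listed above:

```python
from datasets import load_dataset

# The "train" split name is an assumption; "title" and "content" follow the field list above.
books = load_dataset("Aunsiels/InfantBooks", split="train")

book = books[0]
print(book["title"])
print(book["content"][:200])  # first 200 characters of the book text
```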
## Dataset Creation
### Curation Rationale
The goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.
### Source Data
#### Initial Data Collection and Normalization
We automatically collected kids' books on the web.
#### Who are the source language producers?
Native speakers.
### Citation Information
```
Romero, J., & Razniewski, S. (2022).
Do Children Texts Hold The Key To Commonsense Knowledge?
In Proceedings of the 2022 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
```
| Aunsiels/InfantBooks | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:gpl",
"research paper",
"kids",
"children",
"books",
"region:us"
] | 2022-10-24T07:57:35+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["gpl"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "InfantBooks", "tags": ["research paper", "kids", "children", "books"]} | 2022-10-24T10:20:01+00:00 |
d317974c2e9cf1b847048c49f36760808b2337f6 | # Dataset Card for "speechocean762_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762_test | [
"region:us"
] | 2022-10-24T07:58:05+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 288402967.0, "num_examples": 2500}], "download_size": 295709940, "dataset_size": 288402967.0}} | 2022-10-24T07:58:50+00:00 |
8d49c25cba65077c093016cbed51e087f88af77c | # Dataset Card for "speechocean762"
The dataset introduced in:
- Zhang, Junbo, et al. "speechocean762: An open-source non-native English speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phonetic-level annotations are omitted (only the sentence-level scores are used).
- The original full data link: https://github.com/jimbozhang/speechocean762
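A minimal usage sketch (split and field names follow this card; the audio column is decoded on the fly by the datasets library):

```python
from datasets import load_dataset

speechocean = load_dataset("jbpark0614/speechocean762")

sample = speechocean["train"][0]
print(sample["text"], sample["total_score"])  # prompt text and sentence-level score
print(sample["audio"]["sampling_rate"])       # decoded audio metadata
```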
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762 | [
"region:us"
] | 2022-10-24T08:12:33+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 288402967.0, "num_examples": 2500}, {"name": "train", "num_bytes": 290407029.0, "num_examples": 2500}], "download_size": 0, "dataset_size": 578809996.0}} | 2022-10-24T08:43:54+00:00 |
c03ad050756db3748209f1a51ba4b8afc8dcefcb |
# Dataset Card for Parafraseja
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis" in a jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
### Data Splits
* dev.json: 2,000 examples
* test.json: 4,000 examples
* train.json: 15,984 examples
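A minimal loading sketch; the split names below (`train`/`validation`/`test`) are an assumption and may instead mirror the file names listed above:

```python
from datasets import load_dataset

parafraseja = load_dataset("projecte-aina/Parafraseja")

example = parafraseja["train"][0]
print(example["original"])
print(example["new"])
print(example["label"])  # "Parafrasis" or "No Parafrasis"
```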
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and the [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers, with a background on linguistics.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
| projecte-aina/Parafraseja | [
"task_categories:text-classification",
"task_ids:multi-input-text-classification",
"annotations_creators:CLiC-UB",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-24T08:54:42+00:00 | {"annotations_creators": ["CLiC-UB"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["multi-input-text-classification"], "pretty_name": "Parafraseja"} | 2023-11-25T06:09:20+00:00 |
360fa369dc9acc720e69e036a1d3a0e88936e088 | KETI-AIR/aihub_summary_and_report | [
"license:apache-2.0",
"region:us"
] | 2022-10-24T09:07:30+00:00 | {"license": "apache-2.0"} | 2022-10-31T06:08:09+00:00 |
|
6af8474d307a30b92b0cc8d550dbf98f4f5d3c85 | # AutoTrain Dataset for project: dragino-7-7-max_495m
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dragino-7-7-max_495m.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_rssi": -91,
"feat_snr": 7.5,
"target": 125.0
},
{
"feat_rssi": -96,
"feat_snr": 5.0,
"target": 125.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_rssi": "Value(dtype='int64', id=None)",
"feat_snr": "Value(dtype='float64', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 853 |
| valid | 286 |
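A minimal loading sketch; the split names (`train`/`valid`) follow the table above but may be exposed differently (e.g. `validation`) by the repository:

```python
from datasets import load_dataset

dragino = load_dataset("pcoloc/autotrain-data-dragino-7-7-max_495m")

row = dragino["train"][0]
print(row["feat_rssi"], row["feat_snr"], row["target"])  # features and regression target
```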
| pcoloc/autotrain-data-dragino-7-7-max_495m | [
"region:us"
] | 2022-10-24T09:08:48+00:00 | {} | 2022-10-24T09:10:04+00:00 |
43a4ac0c18bdd53bd8acc72323296b48339dc121 |
All eight datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
librispeech = load_dataset("esb/datasets", "librispeech", split="train")
```
- `"esb/datasets"`: the repository namespace. This is fixed for all ESB datasets.
- `"librispeech"`: the dataset name. This can be changed to any of any one of the eight datasets in ESB to download that dataset.
- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
```python
{
'dataset': 'librispeech',
'audio': {'path': '/home/sanchit-gandhi/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
'id': '374-180298-0000'
}
```
### Data Fields
- `dataset`: name of the ESB dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files might take a significant amount of time. Thus it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
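As a concrete illustration of this note, the hedged sketch below queries the sample index before touching the audio column, and shows how the whole column could be re-cast to a different sampling rate if needed:

```python
from datasets import load_dataset, Audio

librispeech = load_dataset("esb/datasets", "librispeech", split="validation.clean")

# Preferred: index the sample first so that only one audio file is decoded.
audio = librispeech[0]["audio"]
print(audio["sampling_rate"], len(audio["array"]))

# Optional: resample the whole column up front (16 kHz shown purely as an example rate).
librispeech = librispeech.cast_column("audio", Audio(sampling_rate=16_000))
```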
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting spelled-out punctuation tokens to their symbolic form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required for use in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esb/leaderboard for scoring.
### Access
All eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esb/diagnostic-dataset](https://huggingface.co/datasets/esb/diagnostic-dataset).
## Summary of ESB Datasets
| Dataset | Domain | Speaking Style | Train (h) | Dev (h) | Test (h) | Transcriptions | License |
|--------------|-----------------------------|-----------------------|-----------|---------|----------|--------------------|-----------------|
| LibriSpeech | Audiobook | Narrated | 960 | 11 | 11 | Normalised | CC-BY-4.0 |
| Common Voice | Wikipedia | Narrated | 1409 | 27 | 27 | Punctuated & Cased | CC0-1.0 |
| Voxpopuli | European Parliament | Oratory | 523 | 5 | 5 | Punctuated | CC0 |
| TED-LIUM | TED talks | Oratory | 454 | 2 | 3 | Normalised | CC-BY-NC-ND 3.0 |
| GigaSpeech | Audiobook, podcast, YouTube | Narrated, spontaneous | 2500 | 12 | 40 | Punctuated | apache-2.0 |
| SPGISpeech | Financial meetings | Oratory, spontaneous | 4900 | 100 | 100 | Punctuated & Cased | User Agreement |
| Earnings-22 | Financial meetings | Oratory, spontaneous | 105 | 5 | 5 | Punctuated & Cased | CC-BY-SA-4.0 |
| AMI | Meetings | Spontaneous | 78 | 9 | 9 | Punctuated & Cased | CC-BY-4.0 |
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esb/datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esb/datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esb/datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esb/datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esb/datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esb/datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esb/datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` | esb/datasets | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:en",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"license:other",
"asr",
"benchmark",
"speech",
"esb",
"region:us"
] | 2022-10-24T09:53:50+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "datasets", "tags": ["asr", "benchmark", "speech", "esb"], "extra_gated_prompt": "Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}} | 2023-01-16T17:51:39+00:00 |
38506bb37ab1b2a64cccec06ca1318b76ed8a2b2 |
# Dataset Card for GuiaCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, each with 5 associated scores and a sentiment label. The data was provided by [GuiaCat](https://guiacat.cat) and curated by the BSC.
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Each review also has a sentiment label derived from the average score. The data is stored as CSV files, one per split.
### Data Instances
```
7,7,7,7,7.0,"Aquest restaurant té una llarga història. Ara han tornat a canviar d'amos i aquest canvi s'ha vist molt repercutit en la carta, preus, servei, etc. Hi ha molta varietat de menjar, i tot boníssim, amb especialitats molt ben trobades. El servei molt càlid i agradable, dóna gust que et serveixin així. I la decoració molt agradable també, bastant curiosa. En fi, pel meu gust, un bon restaurant i bé de preu.",bo
8,9,8,7,8.0,"Molt recomanable en tots els sentits. El servei és molt atent, pulcre i gens agobiant; alhora els plats també presenten un aspecte acurat, cosa que fa, juntament amb l'ambient, que t'oblidis de que, malauradament, està situat pròxim a l'autopista.Com deia, l'ambient és molt acollidor, té un menjador principal molt elegant, perfecte per quedar bé amb tothom!Tot i això, destacar la bona calitat / preu, ja que aquest restaurant té una carta molt extensa en totes les branques i completa, tant de menjar com de vins. Pel qui entengui de vins, podriem dir que tot i tenir una carta molt rica, es recolza una mica en els clàssics.",molt bo
```
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: the average of the four scores above
- text: the text of the review
- label: the sentiment label, one of "molt bo", "bo", "regular", "dolent" or "molt dolent"
### Data Splits
* dev.csv: 500 examples
* test.csv: 500 examples
* train.csv: 4,750 examples
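A minimal sketch for reading one of these files with pandas is shown below. The column order follows the Data Fields section, and the assumption that the CSV files ship without a header row is based on the raw instances above; drop `header=None` if a header is present.
```python
import pandas as pd

# Column order follows the "Data Fields" section above.
columns = ["service", "food", "price-quality", "environment", "avg", "text", "label"]

# Assumes no header row, as suggested by the raw data instances shown earlier.
train = pd.read_csv("train.csv", header=None, names=columns)

print(len(train))                     # expected: 4,750 examples
print(train["label"].value_counts())  # distribution over the five sentiment labels
```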
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by [GuiaCat](https://guiacat.cat).
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users of GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is as follows (a code sketch of the mapping appears after the list):
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
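A small Python sketch of this mapping is given below. How exact boundary values are resolved is an assumption: the list above leaves it open, while the second data instance (average 8.0, label "molt bo") suggests the upper thresholds are inclusive.
```python
def label_from_avg(avg: float) -> str:
    """Map an average score (0-10) to a sentiment label (boundary handling assumed)."""
    if avg >= 8:
        return "molt bo"
    if avg >= 6:
        return "bo"
    if avg >= 4:
        return "regular"
    if avg >= 2:
        return "dolent"
    return "molt dolent"


print(label_from_avg(7.0))  # "bo", matching the first data instance above
```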
#### Who are the annotators?
The users of GuiaCat; the labels are derived automatically from the scores they assigned.
### Personal and Sensitive Information
The dataset includes no personal information, although the reviews could contain hateful or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
```
### Contributions
We want to thank GuiaCat for providing this data.
| projecte-aina/GuiaCat | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-24T10:11:31+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "sentiment-scoring"], "pretty_name": "GuiaCat"} | 2023-11-25T06:27:37+00:00 |