sha (string, 40 chars) | text (string, 0 to 13.4M chars) | id (string, 2 to 117 chars) | tags (list) | created_at (string, 25 chars) | metadata (string, 2 to 31.7M chars) | last_modified (string, 25 chars) |
---|---|---|---|---|---|---|
f61ed293f976a0b025b4004c06d41b12ecfd799c | About Dataset
This is a subset of the ArXiv dataset from Kaggle
https://www.kaggle.com/datasets/Cornell-University/arxiv
About ArXiv
For nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth.
In these times of unique global challenges, efficient extraction of insights from data is essential. To help make the arXiv more accessible, we present a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more.
Our hope is to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
The dataset is freely available via Google Cloud Storage buckets (more info here). Stay tuned for weekly updates to the dataset!
ArXiv is a collaboratively funded, community-supported resource founded by Paul Ginsparg in 1991 and maintained and operated by Cornell University.
The release of this dataset was featured further in a Kaggle blog post here.
See here for more information.
ArXiv On Kaggle
Metadata
This dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in JSON format. This file contains an entry for each paper, with the following fields:
id: ArXiv ID (can be used to access the paper, see below)
submitter: Who submitted the paper
authors: Authors of the paper
title: Title of the paper
comments: Additional info, such as number of pages and figures
journal-ref: Information about the journal the paper was published in
doi: [Digital Object Identifier](https://www.doi.org)
abstract: The abstract of the paper
categories: Categories / tags in the ArXiv system
versions: A version history
You can access each paper directly on ArXiv using these links:
https://arxiv.org/abs/{id}: Page for this paper including its abstract and further links
https://arxiv.org/pdf/{id}: Direct link to download the PDF
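A minimal sketch of iterating over the metadata file and building these links; the file name below is an assumption, adjust it to the file you actually downloaded:
```python
import json

# The metadata file holds one JSON object per line (JSON Lines).
with open("arxiv-metadata-oai-snapshot.json") as f:  # assumed file name
    for line in f:
        paper = json.loads(line)
        abs_url = f"https://arxiv.org/abs/{paper['id']}"  # abstract page
        pdf_url = f"https://arxiv.org/pdf/{paper['id']}"  # direct PDF link
        print(paper["title"], abs_url, pdf_url)
        break  # only show the first entry
```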
License
Creative Commons CC0 1.0 Universal Public Domain Dedication applies to the metadata in this dataset. See https://arxiv.org/help/license for further details and licensing on individual papers.
Acknowledgements
The original data is maintained by ArXiv, huge thanks to the team for building and maintaining this dataset.
We're using https://github.com/mattbierbaum/arxiv-public-datasets to pull the original data, thanks to Matt Bierbaum for providing this tool. | mwitiderrick/arXiv | [
"license:cc0-1.0",
"region:us"
]
| 2022-12-07T10:30:20+00:00 | {"license": "cc0-1.0"} | 2022-12-07T10:46:56+00:00 |
d9f5eae4f7a61c89806edec33e450230b880091e | # Dataset Card for "un_multi-ar-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Shularp/un_multi-ar-en | [
"region:us"
]
| 2022-12-07T10:56:27+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["ar", "en"]}}}], "splits": [{"name": "train", "num_bytes": 4189844561, "num_examples": 9759125}], "download_size": 1926773979, "dataset_size": 4189844561}} | 2022-12-07T11:00:47+00:00 |
11ee3c4fb0c17065c5b6e12802b44d4cc270bb3c | tilos/ASR-CCANTCSC | [
"language:zh",
"license:cc-by-nc-nd-4.0",
"region:us"
]
| 2022-12-07T11:13:27+00:00 | {"language": ["zh"], "license": "cc-by-nc-nd-4.0", "pretty_name": "ASR-CCANTCSC", "dataset_info": {"features": [{"name": "audio", "dtype": "Audio"}, {"name": "sentence", "dtype": "string"}]}} | 2022-12-07T21:39:11+00:00 |
|
11ab6c4fb880495f2e3ef4298b57e36cd0b2cfb5 | # Dataset Card for "medical-keywords"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Medical transcription data scraped from mtsamples.com
Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples.
This dataset contains sample medical transcriptions for various medical specialties.
### Languages
English
### Citation Information
Acknowledgements
Medical transcription data scraped from mtsamples.com
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. | argilla/medical-keywords | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"region:us"
]
| 2022-12-07T11:49:17+00:00 | {"language": ["en"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["keyphrase-extraction", "named-entity-recognition"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "prediction", "list": [{"name": "end", "dtype": "int64"}, {"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "start", "dtype": "int64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "null"}, {"name": "annotation_agent", "dtype": "null"}, {"name": "id", "dtype": "null"}, {"name": "metadata", "struct": [{"name": "medical_specialty", "dtype": "string"}]}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "timestamp[us]"}, {"name": "metrics", "dtype": "null"}], "splits": [{"name": "train", "num_bytes": 58986555, "num_examples": 148699}], "download_size": 17498377, "dataset_size": 58986555}} | 2022-12-07T12:00:34+00:00 |
a27b278e796e2407621dd46ba89a7be93e5ece49 | ywan111/macbook-dataset | [
"license:apache-2.0",
"region:us"
]
| 2022-12-07T11:53:54+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 82755.0, "num_examples": 1}], "download_size": 83469, "dataset_size": 82755.0}} | 2022-12-07T12:04:17+00:00 |
|
660af60fe25551275f1f67914a152ed359e8b935 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/minilm-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-e0ea2e-17426357 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:27:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/minilm-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-07T12:27:54+00:00 |
782f20d6e116a526d592b5cf249d429dee73719f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/sagemaker-distilbert-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-e0ea2e-17426358 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:27:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-07T12:28:22+00:00 |
c16fc003e832c68a86ce8f451915f04ac3883817 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-e0ea2e-17426359 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:27:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewiswatson/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-12-07T12:28:20+00:00 |
3c96f3e1bb9fe73d0100d8680b5e84f6984c2186 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/roberta-base-squad2
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__squad-sample-autoevaluate__squad-sample-778ba0-17436361 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:28:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/roberta-base-squad2", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-07T12:28:50+00:00 |
e617dfb7c33abc745d247e8155215edcf32b3a8d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__squad-sample-autoevaluate__squad-sample-778ba0-17436362 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:28:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-07T12:28:43+00:00 |
aa68f82199d3686be94fe56eb7097dc979573a6a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering-not-evaluated
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-autoevaluate__squad-sample-autoevaluate__squad-sample-778ba0-17436360 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-07T12:28:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering-not-evaluated", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-07T12:28:44+00:00 |
d4d9df3622d971b397413d498510800bd65877d7 | peepoo112/hoi4portraits | [
"region:us"
]
| 2022-12-07T12:47:37+00:00 | {} | 2022-12-07T12:47:46+00:00 |
|
41aef9650901fa2169213f38d49bd6491eb9b77a |
# CLUE-NER Named Entity Recognition Dataset
Field descriptions
+ `text`: the text
+ `entities`: the entities contained in the text
  + `id`: entity `id`
  + `entity`: the string corresponding to the entity
  + `start_offset`: start position of the entity
  + `end_offset`: position one past the end of the entity
  + `label`: the label (type) of the entity
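A minimal sketch of reading the entity spans back from the offsets, assuming the dataset loads with the schema above via `datasets`:
```python
from datasets import load_dataset

ds = load_dataset("xusenlin/clue-ner", split="train")
example = ds[0]
for ent in example["entities"]:
    # end_offset points one past the last character, so a plain slice works
    span = example["text"][ent["start_offset"]:ent["end_offset"]]
    print(ent["label"], span, span == ent["entity"])
```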
| xusenlin/clue-ner | [
"language:zh",
"license:apache-2.0",
"named entity recognition",
"clue",
"region:us"
]
| 2022-12-07T13:14:03+00:00 | {"language": ["zh"], "license": "apache-2.0", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entities", "list": [{"name": "id", "dtype": "int64"}, {"name": "entity", "dtype": "string"}, {"name": "start_offset", "dtype": "int64"}, {"name": "end_offset", "dtype": "int64"}, {"name": "label", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2443356, "num_examples": 10748}, {"name": "test", "num_bytes": 154492, "num_examples": 1345}, {"name": "validation", "num_bytes": 309106, "num_examples": 1343}], "download_size": 1658426, "dataset_size": 2906954}, "tags": ["named entity recognition", "clue"]} | 2022-12-07T14:22:37+00:00 |
f3208919bc8a6695b6612a77e990db4180c0b000 | # Dataset Card for "arxiv-pyserini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/arxiv-pyserini | [
"region:us"
]
| 2022-12-07T13:27:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "submitter", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "journal-ref", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "report-no", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "versions", "list": [{"name": "created", "dtype": "string"}, {"name": "version", "dtype": "string"}]}, {"name": "update_date", "dtype": "string"}, {"name": "authors_parsed", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 3217788413, "num_examples": 2171090}], "download_size": 1801274080, "dataset_size": 3217788413}} | 2022-12-07T15:31:56+00:00 |
9bbf3992d70b4282f5f98b09b56bc43160fe45ed | # Dataset Card for "small-oscar-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ola13/small-oscar-dedup | [
"region:us"
]
| 2022-12-07T13:44:16+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "perplexity", "dtype": "float64"}, {"name": "dup_ratio", "dtype": "float64"}, {"name": "pairs", "sequence": {"sequence": "int64"}}, {"name": "repetitions", "sequence": "binary"}, {"name": "cluster", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 323557137, "num_examples": 43200}], "download_size": 0, "dataset_size": 323557137}} | 2022-12-07T15:48:57+00:00 |
870af1f0f8f400666648265478d86b6dd1524bbb | # Dataset Card for "small-oscar-repetitions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ola13/small-oscar-repetitions | [
"region:us"
]
| 2022-12-07T13:44:34+00:00 | {"dataset_info": {"features": [{"name": "repetition", "dtype": "binary"}, {"name": "ids", "sequence": "int64"}, {"name": "num_docs", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 55598378, "num_examples": 166667}], "download_size": 0, "dataset_size": 55598378}} | 2022-12-07T15:49:03+00:00 |
05c3d12fab6ce69c72e9288a0fee86c021e40f71 | # Dataset Card for "olm-october-2022-tokenized-1024-no-bigscience-filters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized-1024-no-bigscience-filters | [
"region:us"
]
| 2022-12-07T14:01:41+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 79176169656.0, "num_examples": 12861626}], "download_size": 21440888036, "dataset_size": 79176169656.0}} | 2022-12-07T14:49:33+00:00 |
7b405c185b5d8ce3bc0d74767e2c9107cc69a58a |
# CMeEE Chinese Medical Named Entity Recognition Dataset
Field descriptions
+ `text`: the text
+ `entities`: the entities contained in the text
  + `id`: entity `id`
  + `entity`: the string corresponding to the entity
  + `start_offset`: start position of the entity
  + `end_offset`: position one past the end of the entity
  + `label`: the label (type) of the entity
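A minimal sketch, assuming the schema above, that converts the offset-based entities of one example into character-level BIO tags:
```python
from datasets import load_dataset

ds = load_dataset("xusenlin/cmeee", split="train")
ex = ds[0]
bio = ["O"] * len(ex["text"])
for ent in ex["entities"]:
    # mark the first character as B-<label>, the rest of the span as I-<label>
    bio[ent["start_offset"]] = "B-" + ent["label"]
    for i in range(ent["start_offset"] + 1, ent["end_offset"]):
        bio[i] = "I-" + ent["label"]
print(list(zip(ex["text"], bio))[:10])
```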
| xusenlin/cmeee | [
"region:us"
]
| 2022-12-07T14:16:08+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entities", "list": [{"name": "id", "dtype": "int64"}, {"name": "entity", "dtype": "string"}, {"name": "start_offset", "dtype": "int64"}, {"name": "end_offset", "dtype": "int64"}, {"name": "label", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5289666, "num_examples": 15000}, {"name": "test", "num_bytes": 461472, "num_examples": 3000}, {"name": "validation", "num_bytes": 1752698, "num_examples": 5000}], "download_size": 3359069, "dataset_size": 7503836}} | 2022-12-07T14:24:00+00:00 |
0b87dc36a9332970a119ee6d719c0729b1ced900 | # People's Daily (人民日报) Named Entity Recognition Dataset
Field descriptions
+ `text`: the text
+ `entities`: the entities contained in the text
  + `id`: entity `id`
  + `entity`: the string corresponding to the entity
  + `start_offset`: start position of the entity
  + `end_offset`: position one past the end of the entity
  + `label`: the label (type) of the entity
| xusenlin/people-daily-ner | [
"region:us"
]
| 2022-12-07T14:28:09+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entities", "list": [{"name": "id", "dtype": "int64"}, {"name": "entity", "dtype": "string"}, {"name": "start_offset", "dtype": "int64"}, {"name": "end_offset", "dtype": "int64"}, {"name": "label", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4564472, "num_examples": 20864}, {"name": "test", "num_bytes": 1025142, "num_examples": 4636}, {"name": "validation", "num_bytes": 510546, "num_examples": 2318}], "download_size": 3891711, "dataset_size": 6100160}} | 2022-12-07T14:31:42+00:00 |
2efa75f1087ac76dc82534a615a9467b0e9b9d8e | # DuIE Relation Extraction Dataset
Field descriptions
+ `text`: the text
+ `spo_list`: the relation triples contained in the text
  + `subject`: head entity (the subject)
  + `subject_type`: type of the head entity (subject)
  + `object`: tail entity (the object)
  + `object_type`: type of the tail entity (object)
  + `predicate`: the relation
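A minimal sketch of iterating over the relation triples, assuming the schema above:
```python
from datasets import load_dataset

ds = load_dataset("xusenlin/duie", split="train")
ex = ds[0]
for spo in ex["spo_list"]:
    # print each (subject, predicate, object) triple with its entity types
    print(f"({spo['subject']}, {spo['predicate']}, {spo['object']})",
          spo["subject_type"], "->", spo["object_type"])
```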
| xusenlin/duie | [
"region:us"
]
| 2022-12-07T14:41:25+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "spo_list", "list": [{"name": "predicate", "dtype": "string"}, {"name": "object_type", "dtype": "string"}, {"name": "subject_type", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "subject", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 51849478, "num_examples": 172983}, {"name": "validation", "num_bytes": 6512116, "num_examples": 21626}], "download_size": 32568292, "dataset_size": 58361594}} | 2022-12-07T14:49:54+00:00 |
f124af73444fb36f19a8c4cab2436c2079d61a94 | # Dataset Card for "banking_with_vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dvilasuero/banking_with_vectors | [
"region:us"
]
| 2022-12-07T14:52:52+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "activate_my_card", "1": "age_limit", "2": "apple_pay_or_google_pay", "3": "atm_support", "4": "automatic_top_up", "5": "balance_not_updated_after_bank_transfer", "6": "balance_not_updated_after_cheque_or_cash_deposit", "7": "beneficiary_not_allowed", "8": "cancel_transfer", "9": "card_about_to_expire", "10": "card_acceptance", "11": "card_arrival", "12": "card_delivery_estimate", "13": "card_linking", "14": "card_not_working", "15": "card_payment_fee_charged", "16": "card_payment_not_recognised", "17": "card_payment_wrong_exchange_rate", "18": "card_swallowed", "19": "cash_withdrawal_charge", "20": "cash_withdrawal_not_recognised", "21": "change_pin", "22": "compromised_card", "23": "contactless_not_working", "24": "country_support", "25": "declined_card_payment", "26": "declined_cash_withdrawal", "27": "declined_transfer", "28": "direct_debit_payment_not_recognised", "29": "disposable_card_limits", "30": "edit_personal_details", "31": "exchange_charge", "32": "exchange_rate", "33": "exchange_via_app", "34": "extra_charge_on_statement", "35": "failed_transfer", "36": "fiat_currency_support", "37": "get_disposable_virtual_card", "38": "get_physical_card", "39": "getting_spare_card", "40": "getting_virtual_card", "41": "lost_or_stolen_card", "42": "lost_or_stolen_phone", "43": "order_physical_card", "44": "passcode_forgotten", "45": "pending_card_payment", "46": "pending_cash_withdrawal", "47": "pending_top_up", "48": "pending_transfer", "49": "pin_blocked", "50": "receiving_money", "51": "Refund_not_showing_up", "52": "request_refund", "53": "reverted_card_payment?", "54": "supported_cards_and_currencies", "55": "terminate_account", "56": "top_up_by_bank_transfer_charge", "57": "top_up_by_card_charge", "58": "top_up_by_cash_or_cheque", "59": "top_up_failed", "60": "top_up_limits", "61": "top_up_reverted", "62": "topping_up_by_card", "63": "transaction_charged_twice", "64": "transfer_fee_charged", "65": "transfer_into_account", "66": "transfer_not_received_by_recipient", "67": "transfer_timing", "68": "unable_to_verify_identity", "69": "verify_my_identity", "70": "verify_source_of_funds", "71": "verify_top_up", "72": "virtual_card_not_working", "73": "visa_or_mastercard", "74": "why_verify_identity", "75": "wrong_amount_of_cash_received", "76": "wrong_exchange_rate_for_cash_withdrawal"}}}}], "splits": [{"name": "test", "num_bytes": 204010, "num_examples": 3080}], "download_size": 89116, "dataset_size": 204010}} | 2022-12-07T14:53:01+00:00 |
a37e2f0d19c749948fd7e0a090936832a3dae865 | # Dataset Card for "banking77_vectors"
Install with `pip install fast-sentence-transformers`
```python
from fast_sentence_transformers import FastSentenceTransformer as SentenceTransformer
from datasets import load_dataset
# use any sentence-transformer
encoder = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
dataset = load_dataset("banking77", split="test")
dataset = dataset.map(lambda batch: {"vector": encoder.encode(batch["text"])}, batch_size=32, batched=True)
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dvilasuero/banking77_vectors | [
"region:us"
]
| 2022-12-07T14:54:00+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "activate_my_card", "1": "age_limit", "2": "apple_pay_or_google_pay", "3": "atm_support", "4": "automatic_top_up", "5": "balance_not_updated_after_bank_transfer", "6": "balance_not_updated_after_cheque_or_cash_deposit", "7": "beneficiary_not_allowed", "8": "cancel_transfer", "9": "card_about_to_expire", "10": "card_acceptance", "11": "card_arrival", "12": "card_delivery_estimate", "13": "card_linking", "14": "card_not_working", "15": "card_payment_fee_charged", "16": "card_payment_not_recognised", "17": "card_payment_wrong_exchange_rate", "18": "card_swallowed", "19": "cash_withdrawal_charge", "20": "cash_withdrawal_not_recognised", "21": "change_pin", "22": "compromised_card", "23": "contactless_not_working", "24": "country_support", "25": "declined_card_payment", "26": "declined_cash_withdrawal", "27": "declined_transfer", "28": "direct_debit_payment_not_recognised", "29": "disposable_card_limits", "30": "edit_personal_details", "31": "exchange_charge", "32": "exchange_rate", "33": "exchange_via_app", "34": "extra_charge_on_statement", "35": "failed_transfer", "36": "fiat_currency_support", "37": "get_disposable_virtual_card", "38": "get_physical_card", "39": "getting_spare_card", "40": "getting_virtual_card", "41": "lost_or_stolen_card", "42": "lost_or_stolen_phone", "43": "order_physical_card", "44": "passcode_forgotten", "45": "pending_card_payment", "46": "pending_cash_withdrawal", "47": "pending_top_up", "48": "pending_transfer", "49": "pin_blocked", "50": "receiving_money", "51": "Refund_not_showing_up", "52": "request_refund", "53": "reverted_card_payment?", "54": "supported_cards_and_currencies", "55": "terminate_account", "56": "top_up_by_bank_transfer_charge", "57": "top_up_by_card_charge", "58": "top_up_by_cash_or_cheque", "59": "top_up_failed", "60": "top_up_limits", "61": "top_up_reverted", "62": "topping_up_by_card", "63": "transaction_charged_twice", "64": "transfer_fee_charged", "65": "transfer_into_account", "66": "transfer_not_received_by_recipient", "67": "transfer_timing", "68": "unable_to_verify_identity", "69": "verify_my_identity", "70": "verify_source_of_funds", "71": "verify_top_up", "72": "virtual_card_not_working", "73": "visa_or_mastercard", "74": "why_verify_identity", "75": "wrong_amount_of_cash_received", "76": "wrong_exchange_rate_for_cash_withdrawal"}}}}, {"name": "vector", "sequence": "float32"}], "splits": [{"name": "test", "num_bytes": 4947210, "num_examples": 3080}], "download_size": 6749950, "dataset_size": 4947210}} | 2022-12-07T14:56:59+00:00 |
f2e1c4c7628e20d6919b9192a7cd8f576c3a0603 |
Each line includes one example, represented as a JSON object. The critical fields are:
- `sentence`: The natural language sentence describing the pair of images for this example.
- `left_url`: The URL of the left image in the pair.
- `right_url`: The URL of the right image in the pair.
- `label`: The label: true or false.
- `identifier`: The unique identifier for the image, in the format: split-set_id-pair_id-sentence-id. split is the split of the data (train, test, or development). set_id is the unique identifier of the original eight-image set used in the sentence-writing task. pair_id indicates which of the pairs in the set it corresponds to (and is between 0 and 3). sentence-id indicates which of the sentences is associated with this pair (and is either 0 or 1 -- each image pair is associated with at most two sentences).
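A minimal sketch of parsing one example line; the file name is an assumption and the field names are taken from the description above:
```python
import json

with open("nlvr2_dev.json") as f:  # assumed file name
    for line in f:
        ex = json.loads(line)
        # identifier format: split-set_id-pair_id-sentence-id
        split, set_id, pair_id, sentence_id = ex["identifier"].rsplit("-", 3)
        print(ex["label"], ex["sentence"])
        print(ex["left_url"], ex["right_url"])
        break  # only show the first example
```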
| HuggingFaceM4/NLVR2 | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-07T15:44:41+00:00 | {"license": "cc-by-4.0"} | 2022-12-21T15:45:19+00:00 |
50836aef3cb9e4b7a4b040ece9eabe1f71ca4f39 | # Dataset Card for "testcocotrade"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/testcocotrade | [
"region:us"
]
| 2022-12-07T16:02:52+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "list": [{"name": "category_id", "dtype": {"class_label": {"names": {"0": "Image", "1": "Main heading (CAPS)", "2": "Page header (TRADES)", "3": "Running heads", "4": "Section title", "5": "Text Box"}}}}, {"name": "image_id", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "segmentation", "list": {"list": "float32"}}, {"name": "iscrowd", "dtype": "bool"}, {"name": "ignore", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 11022954.0, "num_examples": 6}], "download_size": 10923350, "dataset_size": 11022954.0}} | 2022-12-07T16:03:13+00:00 |
ec1b0fb9b9b7741fb241f4379ff6d2be4d1fbf6c |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@parambharat](https://github.com/parambharat) for adding this dataset.
| parambharat/tamil_asr_corpus | [
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|common_voice",
"source_datasets:extended|openslr",
"language:ta",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-07T16:36:05+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ta"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|common_voice", "extended|openslr"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Tamil ASR Corpus", "tags": []} | 2022-12-07T17:32:59+00:00 |
20b5dcbdbc4776c1412131c0b06b319eac97ef8b | # Dataset Card for NKJP1M – The manually annotated subcorpus of the National Corpus of Polish
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NKJP1M](http://clip.ipipan.waw.pl/NationalCorpusOfPolish)
- **Repository:** [NKJP1M-SGJP](http://download.sgjp.pl/morfeusz/current/)
- **Paper:** [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
- **Point of Contact:** mailto:[email protected]
### Dataset Summary
This is the official dataset for NKJP1M – the 1-million token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego)
Besides the text (divided into paragraphs/samples and sentences) the set contains lemmas and morpho-syntactic tags for all tokens in the corpus.
This release, known as NKJP1M-SGJP, corresponds to the version 1.2 of the corpus with later corrections and improvements. In particular the morpho-syntactic annotation has been aligned with the present version of Morfeusz2 SGJP morphological analyser (as of 2022.12.04).
### Supported Tasks and Leaderboards
The main use of this resource lies in training models for lemmatisation and part-of-speech tagging of Polish.
### Languages
Polish (monolingual)
## Dataset Structure
### Data Instances
```
{'nkjp_text': 'NKJP_1M_1102000002',
'nkjp_par': 'morph_1-p',
'nkjp_sent': 'morph_1.18-s',
'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
'nps': [False, False, False, False, True, False, False, False, False, True],
'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}
```
### Data Fields
- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the present text (document), paragraph and sentence in NKJP. (These allow to map the data point back to the source corpus and to identify paragraphs/samples.)
- `tokens` (sequence of strings): tokens of the text defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morpho-syntactic tags according to Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes): all verbal and deverbal flexemic classes get mapped to a `V`, nominal – `N`, adjectival – `A`, “strange” (abbreviations, alien elements, symbols, emojis…) – `X`, rest as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text.
- `nkjp_ids` (sequence of strings): XML identifiers of particular tokens in NKJP (probably an overkill).
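A minimal sketch, assuming the schema above, of decoding the label features back to their string names and rebuilding the surface text from the `nps` flags:
```python
from datasets import load_dataset

ds = load_dataset("ipipan/nkjp1m", split="train")
tag_names = ds.features["tags"].feature.names    # ClassLabel ids -> tag strings
pos_names = ds.features["poses"].feature.names

ex = ds[0]
for tok, pos_id, tag_id in zip(ex["tokens"], ex["poses"], ex["tags"]):
    print(tok, pos_names[pos_id], tag_names[tag_id])

# nps == True means "no preceding space", so the original text can be rebuilt:
text = "".join(("" if np else " ") + tok
               for tok, np in zip(ex["tokens"], ex["nps"])).lstrip()
print(text)
```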
### Data Splits
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 68943 | 7755 | 8964 |
| tokens | 978368 | 112454 | 125059 |
## Dataset Creation
### Curation Rationale
The National Corpus of Polish (NKJP) was envisioned as the reference corpus of contemporary Polish.
The manually annotated subcorpus (NKJP1M) was thought of as the training data for various NLP tasks.
### Source Data
NKJP is balanced with respect to Polish readership. The detailed rationale is described in Chapter 3 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf) (roughly: 50% press, 30% books, 10% speech, 10% other). The corpus contains texts from the years 1945–2010 (with 80% of the text in the range 1990–2010). Only original Polish texts were gathered (no translations from other languages). The composition of NKJP1M follows this schema (see Chapter 5).
### Annotations
The rules of morphosyntactic annotation used for NKJP are discussed in Chapter 6 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf). Presently (2020), the corpus uses a common tagset with the morphological analyzer [Morfeusz 2](http://morfeusz.sgjp.pl/).
#### Annotation process
The texts were processed with Morfeusz and then the resulting annotations were manually disambiguated and validated/corrected. Each text sample was independently processed by two annotators. In case of annotation conflicts an adjudicator stepped in.
### Licensing Information
 This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
Info on the source corpus: [link](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
```
@Book{nkjp:12,
editor = "Adam Przepiórkowski and Mirosław Bańko and Rafał
L. Górski and Barbara Lewandowska-Tomaszczyk",
title = "Narodowy Korpus Języka Polskiego",
year = 2012,
address = "Warszawa",
pdf = "http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf",
publisher = "Wydawnictwo Naukowe PWN"}
```
Current annotation scheme: [link](https://jezyk-polski.pl/index.php/jp/article/view/72)
```
@article{
kie:etal:21,
author = "Kieraś, Witold and Woliński, Marcin and Nitoń, Bartłomiej",
doi = "https://doi.org/10.31286/JP.101.2.5",
title = "Nowe wielowarstwowe znakowanie lingwistyczne zrównoważonego {N}arodowego {K}orpusu {J}ęzyka {P}olskiego",
url = "https://jezyk-polski.pl/index.php/jp/article/view/72",
journal = "Język Polski",
number = "2",
volume = "CI",
year = "2021",
pages = "59--70"
}
```
<!--
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--> | ipipan/nkjp1m | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
"National Corpus of Polish",
"Narodowy Korpus Języka Polskiego",
"region:us"
]
| 2022-12-07T16:41:20+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["pl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["part-of-speech", "lemmatization"], "pretty_name": "NKJP1M", "tags": ["National Corpus of Polish", "Narodowy Korpus J\u0119zyka Polskiego"], "dataset_info": {"features": [{"name": "nkjp_text", "dtype": "string"}, {"name": "nkjp_par", "dtype": "string"}, {"name": "nkjp_sent", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "cposes", "sequence": {"class_label": {"names": {"0": "A", "1": "Adv", "2": "Comp", "3": "Conj", "4": "Dig", "5": "Interj", "6": "N", "7": "Num", "8": "Part", "9": "Prep", "10": "Punct", "11": "V", "12": "X"}}}}, {"name": "poses", "sequence": {"class_label": {"names": {"0": "adj", "1": "adja", "2": "adjc", "3": "adjp", "4": "adv", "5": "aglt", "6": "bedzie", "7": "brev", "8": "comp", "9": "conj", "10": "depr", "11": "dig", "12": "fin", "13": "frag", "14": "ger", "15": "imps", "16": "impt", "17": "inf", "18": "interj", "19": "interp", "20": "num", "21": "numcomp", "22": "pact", "23": "pacta", "24": "pant", "25": "part", "26": "pcon", "27": "ppas", "28": "ppron12", "29": "ppron3", "30": "praet", "31": "pred", "32": "prep", "33": "romandig", "34": "siebie", "35": "subst", "36": "sym", "37": "winien", "38": "xxs", "39": "xxx"}}}}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "adj:pl:acc:f:com", "1": "adj:pl:acc:f:pos", "2": "adj:pl:acc:f:sup", "3": "adj:pl:acc:m1:com", "4": "adj:pl:acc:m1:pos", "5": "adj:pl:acc:m1:sup", "6": "adj:pl:acc:m2:com", "7": "adj:pl:acc:m2:pos", "8": "adj:pl:acc:m2:sup", "9": "adj:pl:acc:m3:com", "10": "adj:pl:acc:m3:pos", "11": "adj:pl:acc:m3:sup", "12": "adj:pl:acc:n:com", "13": "adj:pl:acc:n:pos", "14": "adj:pl:acc:n:sup", "15": "adj:pl:dat:f:com", "16": "adj:pl:dat:f:pos", "17": "adj:pl:dat:f:sup", "18": "adj:pl:dat:m1:com", "19": "adj:pl:dat:m1:pos", "20": "adj:pl:dat:m1:sup", "21": "adj:pl:dat:m2:pos", "22": "adj:pl:dat:m3:com", "23": "adj:pl:dat:m3:pos", "24": "adj:pl:dat:n:pos", "25": "adj:pl:dat:n:sup", "26": "adj:pl:gen:f:com", "27": "adj:pl:gen:f:pos", "28": "adj:pl:gen:f:sup", "29": "adj:pl:gen:m1:com", "30": "adj:pl:gen:m1:pos", "31": "adj:pl:gen:m1:sup", "32": "adj:pl:gen:m2:com", "33": "adj:pl:gen:m2:pos", "34": "adj:pl:gen:m2:sup", "35": "adj:pl:gen:m3:com", "36": "adj:pl:gen:m3:pos", "37": "adj:pl:gen:m3:sup", "38": "adj:pl:gen:n:com", "39": "adj:pl:gen:n:pos", "40": "adj:pl:gen:n:sup", "41": "adj:pl:inst:f:com", "42": "adj:pl:inst:f:pos", "43": "adj:pl:inst:f:sup", "44": "adj:pl:inst:m1:com", "45": "adj:pl:inst:m1:pos", "46": "adj:pl:inst:m1:sup", "47": "adj:pl:inst:m2:pos", "48": "adj:pl:inst:m3:com", "49": "adj:pl:inst:m3:pos", "50": "adj:pl:inst:m3:sup", "51": "adj:pl:inst:n:com", "52": "adj:pl:inst:n:pos", "53": "adj:pl:inst:n:sup", "54": "adj:pl:loc:f:com", "55": "adj:pl:loc:f:pos", "56": "adj:pl:loc:f:sup", "57": "adj:pl:loc:m1:com", "58": "adj:pl:loc:m1:pos", "59": "adj:pl:loc:m1:sup", "60": "adj:pl:loc:m2:pos", "61": "adj:pl:loc:m3:com", "62": "adj:pl:loc:m3:pos", "63": "adj:pl:loc:m3:sup", "64": "adj:pl:loc:n:com", "65": "adj:pl:loc:n:pos", "66": "adj:pl:loc:n:sup", "67": "adj:pl:nom:f:com", "68": "adj:pl:nom:f:pos", "69": "adj:pl:nom:f:sup", "70": "adj:pl:nom:m1:com", "71": "adj:pl:nom:m1:pos", "72": "adj:pl:nom:m1:sup", 
"73": "adj:pl:nom:m2:com", "74": "adj:pl:nom:m2:pos", "75": "adj:pl:nom:m2:sup", "76": "adj:pl:nom:m3:com", "77": "adj:pl:nom:m3:pos", "78": "adj:pl:nom:m3:sup", "79": "adj:pl:nom:n:com", "80": "adj:pl:nom:n:pos", "81": "adj:pl:nom:n:sup", "82": "adj:pl:voc:f:pos", "83": "adj:pl:voc:m1:pos", "84": "adj:pl:voc:m2:pos", "85": "adj:pl:voc:n:pos", "86": "adj:sg:acc:f:com", "87": "adj:sg:acc:f:pos", "88": "adj:sg:acc:f:sup", "89": "adj:sg:acc:m1:com", "90": "adj:sg:acc:m1:pos", "91": "adj:sg:acc:m1:sup", "92": "adj:sg:acc:m2:com", "93": "adj:sg:acc:m2:pos", "94": "adj:sg:acc:m2:sup", "95": "adj:sg:acc:m3:com", "96": "adj:sg:acc:m3:pos", "97": "adj:sg:acc:m3:sup", "98": "adj:sg:acc:n:com", "99": "adj:sg:acc:n:pos", "100": "adj:sg:acc:n:sup", "101": "adj:sg:dat:f:com", "102": "adj:sg:dat:f:pos", "103": "adj:sg:dat:f:sup", "104": "adj:sg:dat:m1:com", "105": "adj:sg:dat:m1:pos", "106": "adj:sg:dat:m1:sup", "107": "adj:sg:dat:m2:pos", "108": "adj:sg:dat:m3:com", "109": "adj:sg:dat:m3:pos", "110": "adj:sg:dat:m3:sup", "111": "adj:sg:dat:n:com", "112": "adj:sg:dat:n:pos", "113": "adj:sg:dat:n:sup", "114": "adj:sg:gen:f:com", "115": "adj:sg:gen:f:pos", "116": "adj:sg:gen:f:sup", "117": "adj:sg:gen:m1:com", "118": "adj:sg:gen:m1:pos", "119": "adj:sg:gen:m1:sup", "120": "adj:sg:gen:m2:pos", "121": "adj:sg:gen:m2:sup", "122": "adj:sg:gen:m3:com", "123": "adj:sg:gen:m3:pos", "124": "adj:sg:gen:m3:sup", "125": "adj:sg:gen:n:com", "126": "adj:sg:gen:n:pos", "127": "adj:sg:gen:n:sup", "128": "adj:sg:inst:f:com", "129": "adj:sg:inst:f:pos", "130": "adj:sg:inst:f:sup", "131": "adj:sg:inst:m1:com", "132": "adj:sg:inst:m1:pos", "133": "adj:sg:inst:m1:sup", "134": "adj:sg:inst:m2:com", "135": "adj:sg:inst:m2:pos", "136": "adj:sg:inst:m2:sup", "137": "adj:sg:inst:m3:com", "138": "adj:sg:inst:m3:pos", "139": "adj:sg:inst:m3:sup", "140": "adj:sg:inst:n:com", "141": "adj:sg:inst:n:pos", "142": "adj:sg:inst:n:sup", "143": "adj:sg:loc:f:com", "144": "adj:sg:loc:f:pos", "145": "adj:sg:loc:f:sup", "146": "adj:sg:loc:m1:com", "147": "adj:sg:loc:m1:pos", "148": "adj:sg:loc:m1:sup", "149": "adj:sg:loc:m2:com", "150": "adj:sg:loc:m2:pos", "151": "adj:sg:loc:m3:com", "152": "adj:sg:loc:m3:pos", "153": "adj:sg:loc:m3:sup", "154": "adj:sg:loc:n:com", "155": "adj:sg:loc:n:pos", "156": "adj:sg:loc:n:sup", "157": "adj:sg:nom:f:com", "158": "adj:sg:nom:f:pos", "159": "adj:sg:nom:f:sup", "160": "adj:sg:nom:m1:com", "161": "adj:sg:nom:m1:pos", "162": "adj:sg:nom:m1:sup", "163": "adj:sg:nom:m2:com", "164": "adj:sg:nom:m2:pos", "165": "adj:sg:nom:m2:sup", "166": "adj:sg:nom:m3:com", "167": "adj:sg:nom:m3:pos", "168": "adj:sg:nom:m3:sup", "169": "adj:sg:nom:n:com", "170": "adj:sg:nom:n:pos", "171": "adj:sg:nom:n:sup", "172": "adj:sg:voc:f:pos", "173": "adj:sg:voc:f:sup", "174": "adj:sg:voc:m1:pos", "175": "adj:sg:voc:m1:sup", "176": "adj:sg:voc:m2:pos", "177": "adj:sg:voc:m3:pos", "178": "adj:sg:voc:n:pos", "179": "adja", "180": "adjc", "181": "adjp:dat", "182": "adjp:gen", "183": "adv", "184": "adv:com", "185": "adv:pos", "186": "adv:sup", "187": "aglt:pl:pri:imperf:nwok", "188": "aglt:pl:sec:imperf:nwok", "189": "aglt:sg:pri:imperf:nwok", "190": "aglt:sg:pri:imperf:wok", "191": "aglt:sg:sec:imperf:nwok", "192": "aglt:sg:sec:imperf:wok", "193": "bedzie:pl:pri:imperf", "194": "bedzie:pl:sec:imperf", "195": "bedzie:pl:ter:imperf", "196": "bedzie:sg:pri:imperf", "197": "bedzie:sg:sec:imperf", "198": "bedzie:sg:ter:imperf", "199": "brev:npun", "200": "brev:pun", "201": "comp", "202": "conj", "203": "depr:pl:acc:m2", "204": "depr:pl:nom:m2", 
"205": "depr:pl:voc:m2", "206": "dig", "207": "fin:pl:pri:imperf", "208": "fin:pl:pri:perf", "209": "fin:pl:sec:imperf", "210": "fin:pl:sec:perf", "211": "fin:pl:ter:imperf", "212": "fin:pl:ter:perf", "213": "fin:sg:pri:imperf", "214": "fin:sg:pri:perf", "215": "fin:sg:sec:imperf", "216": "fin:sg:sec:perf", "217": "fin:sg:ter:imperf", "218": "fin:sg:ter:perf", "219": "frag", "220": "ger:pl:acc:n:imperf:aff", "221": "ger:pl:acc:n:perf:aff", "222": "ger:pl:dat:n:perf:aff", "223": "ger:pl:gen:n:imperf:aff", "224": "ger:pl:gen:n:perf:aff", "225": "ger:pl:inst:n:imperf:aff", "226": "ger:pl:inst:n:perf:aff", "227": "ger:pl:loc:n:imperf:aff", "228": "ger:pl:loc:n:perf:aff", "229": "ger:pl:nom:n:imperf:aff", "230": "ger:pl:nom:n:perf:aff", "231": "ger:sg:acc:n:imperf:aff", "232": "ger:sg:acc:n:imperf:neg", "233": "ger:sg:acc:n:perf:aff", "234": "ger:sg:acc:n:perf:neg", "235": "ger:sg:dat:n:imperf:aff", "236": "ger:sg:dat:n:perf:aff", "237": "ger:sg:dat:n:perf:neg", "238": "ger:sg:gen:n:imperf:aff", "239": "ger:sg:gen:n:imperf:neg", "240": "ger:sg:gen:n:perf:aff", "241": "ger:sg:gen:n:perf:neg", "242": "ger:sg:inst:n:imperf:aff", "243": "ger:sg:inst:n:imperf:neg", "244": "ger:sg:inst:n:perf:aff", "245": "ger:sg:inst:n:perf:neg", "246": "ger:sg:loc:n:imperf:aff", "247": "ger:sg:loc:n:imperf:neg", "248": "ger:sg:loc:n:perf:aff", "249": "ger:sg:loc:n:perf:neg", "250": "ger:sg:nom:n:imperf:aff", "251": "ger:sg:nom:n:imperf:neg", "252": "ger:sg:nom:n:perf:aff", "253": "ger:sg:nom:n:perf:neg", "254": "imps:imperf", "255": "imps:perf", "256": "impt:pl:pri:imperf", "257": "impt:pl:pri:perf", "258": "impt:pl:sec:imperf", "259": "impt:pl:sec:perf", "260": "impt:sg:pri:imperf", "261": "impt:sg:sec:imperf", "262": "impt:sg:sec:perf", "263": "inf:imperf", "264": "inf:perf", "265": "interj", "266": "interp", "267": "num:pl:acc:f:congr:ncol", "268": "num:pl:acc:f:rec", "269": "num:pl:acc:f:rec:ncol", "270": "num:pl:acc:m1:rec", "271": "num:pl:acc:m1:rec:col", "272": "num:pl:acc:m1:rec:ncol", "273": "num:pl:acc:m2:congr:ncol", "274": "num:pl:acc:m2:rec", "275": "num:pl:acc:m2:rec:ncol", "276": "num:pl:acc:m3:congr", "277": "num:pl:acc:m3:congr:ncol", "278": "num:pl:acc:m3:rec", "279": "num:pl:acc:m3:rec:ncol", "280": "num:pl:acc:n:congr:ncol", "281": "num:pl:acc:n:rec", "282": "num:pl:acc:n:rec:col", "283": "num:pl:acc:n:rec:ncol", "284": "num:pl:dat:f:congr", "285": "num:pl:dat:f:congr:ncol", "286": "num:pl:dat:m1:congr", "287": "num:pl:dat:m1:congr:col", "288": "num:pl:dat:m1:congr:ncol", "289": "num:pl:dat:m2:congr", "290": "num:pl:dat:m3:congr:ncol", "291": "num:pl:dat:n:congr", "292": "num:pl:dat:n:congr:ncol", "293": "num:pl:gen:f:congr", "294": "num:pl:gen:f:congr:ncol", "295": "num:pl:gen:f:rec", "296": "num:pl:gen:f:rec:ncol", "297": "num:pl:gen:m1:congr", "298": "num:pl:gen:m1:congr:ncol", "299": "num:pl:gen:m1:rec", "300": "num:pl:gen:m1:rec:col", "301": "num:pl:gen:m2:congr", "302": "num:pl:gen:m2:congr:ncol", "303": "num:pl:gen:m2:rec", "304": "num:pl:gen:m3:congr", "305": "num:pl:gen:m3:congr:ncol", "306": "num:pl:gen:m3:rec", "307": "num:pl:gen:m3:rec:ncol", "308": "num:pl:gen:n:congr", "309": "num:pl:gen:n:congr:ncol", "310": "num:pl:gen:n:rec", "311": "num:pl:gen:n:rec:col", "312": "num:pl:inst:f:congr", "313": "num:pl:inst:f:congr:ncol", "314": "num:pl:inst:m1:congr", "315": "num:pl:inst:m1:congr:ncol", "316": "num:pl:inst:m1:rec:col", "317": "num:pl:inst:m2:congr", "318": "num:pl:inst:m2:congr:ncol", "319": "num:pl:inst:m3:congr", "320": "num:pl:inst:m3:congr:ncol", "321": "num:pl:inst:n:congr", 
"322": "num:pl:inst:n:congr:ncol", "323": "num:pl:inst:n:rec:col", "324": "num:pl:loc:f:congr", "325": "num:pl:loc:f:congr:ncol", "326": "num:pl:loc:m1:congr", "327": "num:pl:loc:m1:congr:ncol", "328": "num:pl:loc:m2:congr", "329": "num:pl:loc:m2:congr:ncol", "330": "num:pl:loc:m3:congr", "331": "num:pl:loc:m3:congr:ncol", "332": "num:pl:loc:n:congr", "333": "num:pl:loc:n:congr:ncol", "334": "num:pl:nom:f:congr:ncol", "335": "num:pl:nom:f:rec", "336": "num:pl:nom:f:rec:ncol", "337": "num:pl:nom:m1:congr:ncol", "338": "num:pl:nom:m1:rec", "339": "num:pl:nom:m1:rec:col", "340": "num:pl:nom:m1:rec:ncol", "341": "num:pl:nom:m2:congr:ncol", "342": "num:pl:nom:m2:rec", "343": "num:pl:nom:m2:rec:ncol", "344": "num:pl:nom:m3:congr:ncol", "345": "num:pl:nom:m3:rec", "346": "num:pl:nom:m3:rec:ncol", "347": "num:pl:nom:n:congr", "348": "num:pl:nom:n:congr:ncol", "349": "num:pl:nom:n:rec", "350": "num:pl:nom:n:rec:col", "351": "num:pl:nom:n:rec:ncol", "352": "num:sg:acc:f:rec", "353": "num:sg:acc:f:rec:ncol", "354": "num:sg:acc:m1:rec:ncol", "355": "num:sg:acc:m2:rec", "356": "num:sg:acc:m3:rec", "357": "num:sg:acc:m3:rec:ncol", "358": "num:sg:acc:n:rec", "359": "num:sg:gen:f:rec", "360": "num:sg:gen:m3:rec", "361": "num:sg:gen:n:rec", "362": "num:sg:inst:m3:rec", "363": "num:sg:loc:f:rec", "364": "num:sg:loc:m3:congr", "365": "num:sg:loc:m3:rec", "366": "num:sg:nom:f:rec", "367": "num:sg:nom:m2:rec", "368": "num:sg:nom:m3:rec", "369": "num:sg:nom:m3:rec:ncol", "370": "num:sg:nom:n:rec", "371": "numcomp", "372": "pact:pl:acc:f:imperf:aff", "373": "pact:pl:acc:f:imperf:neg", "374": "pact:pl:acc:m1:imperf:aff", "375": "pact:pl:acc:m2:imperf:aff", "376": "pact:pl:acc:m3:imperf:aff", "377": "pact:pl:acc:m3:imperf:neg", "378": "pact:pl:acc:n:imperf:aff", "379": "pact:pl:acc:n:imperf:neg", "380": "pact:pl:dat:f:imperf:aff", "381": "pact:pl:dat:m1:imperf:aff", "382": "pact:pl:dat:m2:imperf:aff", "383": "pact:pl:dat:m3:imperf:aff", "384": "pact:pl:dat:n:imperf:aff", "385": "pact:pl:gen:f:imperf:aff", "386": "pact:pl:gen:f:imperf:neg", "387": "pact:pl:gen:m1:imperf:aff", "388": "pact:pl:gen:m1:imperf:neg", "389": "pact:pl:gen:m2:imperf:aff", "390": "pact:pl:gen:m3:imperf:aff", "391": "pact:pl:gen:m3:imperf:neg", "392": "pact:pl:gen:n:imperf:aff", "393": "pact:pl:inst:f:imperf:aff", "394": "pact:pl:inst:m1:imperf:aff", "395": "pact:pl:inst:m2:imperf:aff", "396": "pact:pl:inst:m3:imperf:aff", "397": "pact:pl:inst:m3:imperf:neg", "398": "pact:pl:inst:n:imperf:aff", "399": "pact:pl:inst:n:imperf:neg", "400": "pact:pl:loc:f:imperf:aff", "401": "pact:pl:loc:m1:imperf:aff", "402": "pact:pl:loc:m3:imperf:aff", "403": "pact:pl:loc:m3:imperf:neg", "404": "pact:pl:loc:n:imperf:aff", "405": "pact:pl:loc:n:imperf:neg", "406": "pact:pl:nom:f:imperf:aff", "407": "pact:pl:nom:f:imperf:neg", "408": "pact:pl:nom:m1:imperf:aff", "409": "pact:pl:nom:m2:imperf:aff", "410": "pact:pl:nom:m3:imperf:aff", "411": "pact:pl:nom:n:imperf:aff", "412": "pact:pl:nom:n:imperf:neg", "413": "pact:pl:voc:f:imperf:aff", "414": "pact:sg:acc:f:imperf:aff", "415": "pact:sg:acc:f:imperf:neg", "416": "pact:sg:acc:m1:imperf:aff", "417": "pact:sg:acc:m2:imperf:aff", "418": "pact:sg:acc:m3:imperf:aff", "419": "pact:sg:acc:n:imperf:aff", "420": "pact:sg:acc:n:imperf:neg", "421": "pact:sg:dat:f:imperf:aff", "422": "pact:sg:dat:m1:imperf:aff", "423": "pact:sg:dat:m2:imperf:aff", "424": "pact:sg:dat:m3:imperf:aff", "425": "pact:sg:dat:n:imperf:aff", "426": "pact:sg:gen:f:imperf:aff", "427": "pact:sg:gen:f:imperf:neg", "428": "pact:sg:gen:m1:imperf:aff", 
"429": "pact:sg:gen:m1:imperf:neg", "430": "pact:sg:gen:m2:imperf:aff", "431": "pact:sg:gen:m3:imperf:aff", "432": "pact:sg:gen:m3:imperf:neg", "433": "pact:sg:gen:n:imperf:aff", "434": "pact:sg:gen:n:imperf:neg", "435": "pact:sg:inst:f:imperf:aff", "436": "pact:sg:inst:f:imperf:neg", "437": "pact:sg:inst:m1:imperf:aff", "438": "pact:sg:inst:m1:imperf:neg", "439": "pact:sg:inst:m2:imperf:aff", "440": "pact:sg:inst:m2:imperf:neg", "441": "pact:sg:inst:m3:imperf:aff", "442": "pact:sg:inst:m3:imperf:neg", "443": "pact:sg:inst:n:imperf:aff", "444": "pact:sg:loc:f:imperf:aff", "445": "pact:sg:loc:f:imperf:neg", "446": "pact:sg:loc:m1:imperf:aff", "447": "pact:sg:loc:m2:imperf:aff", "448": "pact:sg:loc:m3:imperf:aff", "449": "pact:sg:loc:m3:imperf:neg", "450": "pact:sg:loc:n:imperf:aff", "451": "pact:sg:loc:n:imperf:neg", "452": "pact:sg:nom:f:imperf:aff", "453": "pact:sg:nom:f:imperf:neg", "454": "pact:sg:nom:m1:imperf:aff", "455": "pact:sg:nom:m1:imperf:neg", "456": "pact:sg:nom:m2:imperf:aff", "457": "pact:sg:nom:m3:imperf:aff", "458": "pact:sg:nom:m3:imperf:neg", "459": "pact:sg:nom:n:imperf:aff", "460": "pact:sg:nom:n:imperf:neg", "461": "pact:sg:voc:m1:imperf:aff", "462": "pacta", "463": "pant:perf", "464": "part", "465": "part:nwok", "466": "part:wok", "467": "pcon:imperf", "468": "ppas:pl:acc:f:imperf:aff", "469": "ppas:pl:acc:f:perf:aff", "470": "ppas:pl:acc:f:perf:neg", "471": "ppas:pl:acc:m1:imperf:aff", "472": "ppas:pl:acc:m1:imperf:neg", "473": "ppas:pl:acc:m1:perf:aff", "474": "ppas:pl:acc:m1:perf:neg", "475": "ppas:pl:acc:m2:imperf:aff", "476": "ppas:pl:acc:m2:perf:aff", "477": "ppas:pl:acc:m3:imperf:aff", "478": "ppas:pl:acc:m3:perf:aff", "479": "ppas:pl:acc:m3:perf:neg", "480": "ppas:pl:acc:n:imperf:aff", "481": "ppas:pl:acc:n:imperf:neg", "482": "ppas:pl:acc:n:perf:aff", "483": "ppas:pl:acc:n:perf:neg", "484": "ppas:pl:dat:f:imperf:aff", "485": "ppas:pl:dat:f:perf:aff", "486": "ppas:pl:dat:f:perf:neg", "487": "ppas:pl:dat:m1:imperf:aff", "488": "ppas:pl:dat:m1:perf:aff", "489": "ppas:pl:dat:m1:perf:neg", "490": "ppas:pl:dat:m2:imperf:aff", "491": "ppas:pl:dat:m3:imperf:aff", "492": "ppas:pl:dat:m3:perf:aff", "493": "ppas:pl:dat:n:imperf:aff", "494": "ppas:pl:dat:n:perf:aff", "495": "ppas:pl:gen:f:imperf:aff", "496": "ppas:pl:gen:f:imperf:neg", "497": "ppas:pl:gen:f:perf:aff", "498": "ppas:pl:gen:f:perf:neg", "499": "ppas:pl:gen:m1:imperf:aff", "500": "ppas:pl:gen:m1:imperf:neg", "501": "ppas:pl:gen:m1:perf:aff", "502": "ppas:pl:gen:m1:perf:neg", "503": "ppas:pl:gen:m2:imperf:aff", "504": "ppas:pl:gen:m2:perf:aff", "505": "ppas:pl:gen:m3:imperf:aff", "506": "ppas:pl:gen:m3:imperf:neg", "507": "ppas:pl:gen:m3:perf:aff", "508": "ppas:pl:gen:m3:perf:neg", "509": "ppas:pl:gen:n:imperf:aff", "510": "ppas:pl:gen:n:perf:aff", "511": "ppas:pl:gen:n:perf:neg", "512": "ppas:pl:inst:f:imperf:aff", "513": "ppas:pl:inst:f:perf:aff", "514": "ppas:pl:inst:m1:imperf:aff", "515": "ppas:pl:inst:m1:perf:aff", "516": "ppas:pl:inst:m2:perf:aff", "517": "ppas:pl:inst:m3:imperf:aff", "518": "ppas:pl:inst:m3:perf:aff", "519": "ppas:pl:inst:n:imperf:aff", "520": "ppas:pl:inst:n:perf:aff", "521": "ppas:pl:loc:f:imperf:aff", "522": "ppas:pl:loc:f:imperf:neg", "523": "ppas:pl:loc:f:perf:aff", "524": "ppas:pl:loc:f:perf:neg", "525": "ppas:pl:loc:m1:imperf:aff", "526": "ppas:pl:loc:m1:perf:aff", "527": "ppas:pl:loc:m2:imperf:aff", "528": "ppas:pl:loc:m3:imperf:aff", "529": "ppas:pl:loc:m3:perf:aff", "530": "ppas:pl:loc:m3:perf:neg", "531": "ppas:pl:loc:n:imperf:aff", "532": "ppas:pl:loc:n:perf:aff", "533": 
"ppas:pl:loc:n:perf:neg", "534": "ppas:pl:nom:f:imperf:aff", "535": "ppas:pl:nom:f:imperf:neg", "536": "ppas:pl:nom:f:perf:aff", "537": "ppas:pl:nom:f:perf:neg", "538": "ppas:pl:nom:m1:imperf:aff", "539": "ppas:pl:nom:m1:imperf:neg", "540": "ppas:pl:nom:m1:perf:aff", "541": "ppas:pl:nom:m1:perf:neg", "542": "ppas:pl:nom:m2:imperf:aff", "543": "ppas:pl:nom:m2:perf:aff", "544": "ppas:pl:nom:m3:imperf:aff", "545": "ppas:pl:nom:m3:imperf:neg", "546": "ppas:pl:nom:m3:perf:aff", "547": "ppas:pl:nom:m3:perf:neg", "548": "ppas:pl:nom:n:imperf:aff", "549": "ppas:pl:nom:n:perf:aff", "550": "ppas:pl:nom:n:perf:neg", "551": "ppas:pl:voc:f:imperf:aff", "552": "ppas:sg:acc:f:imperf:aff", "553": "ppas:sg:acc:f:imperf:neg", "554": "ppas:sg:acc:f:perf:aff", "555": "ppas:sg:acc:f:perf:neg", "556": "ppas:sg:acc:m1:imperf:aff", "557": "ppas:sg:acc:m1:perf:aff", "558": "ppas:sg:acc:m2:imperf:aff", "559": "ppas:sg:acc:m2:perf:aff", "560": "ppas:sg:acc:m3:imperf:aff", "561": "ppas:sg:acc:m3:imperf:neg", "562": "ppas:sg:acc:m3:perf:aff", "563": "ppas:sg:acc:m3:perf:neg", "564": "ppas:sg:acc:n:imperf:aff", "565": "ppas:sg:acc:n:perf:aff", "566": "ppas:sg:acc:n:perf:neg", "567": "ppas:sg:dat:f:imperf:aff", "568": "ppas:sg:dat:f:imperf:neg", "569": "ppas:sg:dat:f:perf:aff", "570": "ppas:sg:dat:f:perf:neg", "571": "ppas:sg:dat:m1:imperf:aff", "572": "ppas:sg:dat:m1:perf:aff", "573": "ppas:sg:dat:m2:perf:aff", "574": "ppas:sg:dat:m3:imperf:aff", "575": "ppas:sg:dat:m3:perf:aff", "576": "ppas:sg:dat:n:perf:aff", "577": "ppas:sg:gen:f:imperf:aff", "578": "ppas:sg:gen:f:imperf:neg", "579": "ppas:sg:gen:f:perf:aff", "580": "ppas:sg:gen:f:perf:neg", "581": "ppas:sg:gen:m1:imperf:aff", "582": "ppas:sg:gen:m1:perf:aff", "583": "ppas:sg:gen:m1:perf:neg", "584": "ppas:sg:gen:m2:imperf:aff", "585": "ppas:sg:gen:m2:perf:aff", "586": "ppas:sg:gen:m3:imperf:aff", "587": "ppas:sg:gen:m3:imperf:neg", "588": "ppas:sg:gen:m3:perf:aff", "589": "ppas:sg:gen:m3:perf:neg", "590": "ppas:sg:gen:n:imperf:aff", "591": "ppas:sg:gen:n:imperf:neg", "592": "ppas:sg:gen:n:perf:aff", "593": "ppas:sg:gen:n:perf:neg", "594": "ppas:sg:inst:f:imperf:aff", "595": "ppas:sg:inst:f:imperf:neg", "596": "ppas:sg:inst:f:perf:aff", "597": "ppas:sg:inst:f:perf:neg", "598": "ppas:sg:inst:m1:imperf:aff", "599": "ppas:sg:inst:m1:imperf:neg", "600": "ppas:sg:inst:m1:perf:aff", "601": "ppas:sg:inst:m1:perf:neg", "602": "ppas:sg:inst:m2:imperf:aff", "603": "ppas:sg:inst:m2:perf:aff", "604": "ppas:sg:inst:m3:imperf:aff", "605": "ppas:sg:inst:m3:imperf:neg", "606": "ppas:sg:inst:m3:perf:aff", "607": "ppas:sg:inst:m3:perf:neg", "608": "ppas:sg:inst:n:imperf:aff", "609": "ppas:sg:inst:n:imperf:neg", "610": "ppas:sg:inst:n:perf:aff", "611": "ppas:sg:inst:n:perf:neg", "612": "ppas:sg:loc:f:imperf:aff", "613": "ppas:sg:loc:f:perf:aff", "614": "ppas:sg:loc:f:perf:neg", "615": "ppas:sg:loc:m1:imperf:aff", "616": "ppas:sg:loc:m1:perf:aff", "617": "ppas:sg:loc:m2:imperf:aff", "618": "ppas:sg:loc:m3:imperf:aff", "619": "ppas:sg:loc:m3:imperf:neg", "620": "ppas:sg:loc:m3:perf:aff", "621": "ppas:sg:loc:m3:perf:neg", "622": "ppas:sg:loc:n:imperf:aff", "623": "ppas:sg:loc:n:perf:aff", "624": "ppas:sg:loc:n:perf:neg", "625": "ppas:sg:nom:f:imperf:aff", "626": "ppas:sg:nom:f:imperf:neg", "627": "ppas:sg:nom:f:perf:aff", "628": "ppas:sg:nom:f:perf:neg", "629": "ppas:sg:nom:m1:imperf:aff", "630": "ppas:sg:nom:m1:imperf:neg", "631": "ppas:sg:nom:m1:perf:aff", "632": "ppas:sg:nom:m1:perf:neg", "633": "ppas:sg:nom:m2:imperf:aff", "634": "ppas:sg:nom:m2:perf:aff", "635": 
"ppas:sg:nom:m3:imperf:aff", "636": "ppas:sg:nom:m3:imperf:neg", "637": "ppas:sg:nom:m3:perf:aff", "638": "ppas:sg:nom:m3:perf:neg", "639": "ppas:sg:nom:n:imperf:aff", "640": "ppas:sg:nom:n:imperf:neg", "641": "ppas:sg:nom:n:perf:aff", "642": "ppas:sg:nom:n:perf:neg", "643": "ppas:sg:voc:m1:perf:aff", "644": "ppas:sg:voc:m2:imperf:aff", "645": "ppas:sg:voc:m3:perf:aff", "646": "ppron12:pl:acc:f:pri", "647": "ppron12:pl:acc:f:sec", "648": "ppron12:pl:acc:m1:pri", "649": "ppron12:pl:acc:m1:sec", "650": "ppron12:pl:acc:m2:sec", "651": "ppron12:pl:acc:n:sec", "652": "ppron12:pl:dat:f:pri", "653": "ppron12:pl:dat:f:sec", "654": "ppron12:pl:dat:m1:pri", "655": "ppron12:pl:dat:m1:sec", "656": "ppron12:pl:dat:m3:sec", "657": "ppron12:pl:gen:f:pri", "658": "ppron12:pl:gen:f:sec", "659": "ppron12:pl:gen:m1:pri", "660": "ppron12:pl:gen:m1:sec", "661": "ppron12:pl:gen:m2:pri", "662": "ppron12:pl:inst:f:pri", "663": "ppron12:pl:inst:m1:pri", "664": "ppron12:pl:inst:m1:sec", "665": "ppron12:pl:inst:n:pri", "666": "ppron12:pl:loc:f:sec", "667": "ppron12:pl:loc:m1:pri", "668": "ppron12:pl:loc:m1:sec", "669": "ppron12:pl:loc:m3:sec", "670": "ppron12:pl:nom:f:pri", "671": "ppron12:pl:nom:f:sec", "672": "ppron12:pl:nom:m1:pri", "673": "ppron12:pl:nom:m1:sec", "674": "ppron12:pl:nom:m2:pri", "675": "ppron12:pl:nom:n:sec", "676": "ppron12:pl:voc:m1:sec", "677": "ppron12:pl:voc:m2:sec", "678": "ppron12:sg:acc:f:pri:akc", "679": "ppron12:sg:acc:f:sec:akc", "680": "ppron12:sg:acc:f:sec:nakc", "681": "ppron12:sg:acc:m1:pri:akc", "682": "ppron12:sg:acc:m1:pri:nakc", "683": "ppron12:sg:acc:m1:sec:akc", "684": "ppron12:sg:acc:m1:sec:nakc", "685": "ppron12:sg:acc:m2:pri:akc", "686": "ppron12:sg:acc:m2:sec:nakc", "687": "ppron12:sg:acc:m3:pri:akc", "688": "ppron12:sg:acc:m3:sec:nakc", "689": "ppron12:sg:acc:n:pri:akc", "690": "ppron12:sg:acc:n:sec:nakc", "691": "ppron12:sg:dat:f:pri:akc", "692": "ppron12:sg:dat:f:pri:nakc", "693": "ppron12:sg:dat:f:sec:akc", "694": "ppron12:sg:dat:f:sec:nakc", "695": "ppron12:sg:dat:m1:pri:akc", "696": "ppron12:sg:dat:m1:pri:nakc", "697": "ppron12:sg:dat:m1:sec:akc", "698": "ppron12:sg:dat:m1:sec:nakc", "699": "ppron12:sg:dat:m2:pri:nakc", "700": "ppron12:sg:dat:m2:sec:akc", "701": "ppron12:sg:dat:m2:sec:nakc", "702": "ppron12:sg:gen:f:pri:akc", "703": "ppron12:sg:gen:f:sec:akc", "704": "ppron12:sg:gen:f:sec:nakc", "705": "ppron12:sg:gen:m1:pri:akc", "706": "ppron12:sg:gen:m1:sec:akc", "707": "ppron12:sg:gen:m1:sec:nakc", "708": "ppron12:sg:gen:m2:pri:akc", "709": "ppron12:sg:gen:m2:sec:akc", "710": "ppron12:sg:gen:m2:sec:nakc", "711": "ppron12:sg:gen:n:pri:akc", "712": "ppron12:sg:inst:f:pri", "713": "ppron12:sg:inst:f:sec", "714": "ppron12:sg:inst:m1:pri", "715": "ppron12:sg:inst:m1:pri:nakc", "716": "ppron12:sg:inst:m1:sec", "717": "ppron12:sg:inst:n:sec", "718": "ppron12:sg:loc:f:pri", "719": "ppron12:sg:loc:f:sec", "720": "ppron12:sg:loc:m1:pri", "721": "ppron12:sg:loc:m1:sec", "722": "ppron12:sg:loc:m3:pri", "723": "ppron12:sg:nom:f:pri", "724": "ppron12:sg:nom:f:sec", "725": "ppron12:sg:nom:m1:pri", "726": "ppron12:sg:nom:m1:pri:nakc", "727": "ppron12:sg:nom:m1:sec", "728": "ppron12:sg:nom:m2:pri", "729": "ppron12:sg:nom:m2:sec", "730": "ppron12:sg:nom:m3:pri", "731": "ppron12:sg:nom:m3:sec", "732": "ppron12:sg:nom:n:sec", "733": "ppron12:sg:voc:f:sec", "734": "ppron12:sg:voc:m1:sec", "735": "ppron12:sg:voc:m2:sec", "736": "ppron12:sg:voc:n:sec", "737": "ppron3:pl:acc:f:ter:akc:npraep", "738": "ppron3:pl:acc:f:ter:akc:praep", "739": "ppron3:pl:acc:m1:ter:akc:npraep", "740": 
"ppron3:pl:acc:m1:ter:akc:praep", "741": "ppron3:pl:acc:m2:ter:akc:npraep", "742": "ppron3:pl:acc:m2:ter:akc:praep", "743": "ppron3:pl:acc:m3:ter:akc:npraep", "744": "ppron3:pl:acc:m3:ter:akc:praep", "745": "ppron3:pl:acc:n:ter:akc:npraep", "746": "ppron3:pl:acc:n:ter:akc:praep", "747": "ppron3:pl:dat:f:ter:akc:npraep", "748": "ppron3:pl:dat:f:ter:akc:praep", "749": "ppron3:pl:dat:m1:ter:akc:npraep", "750": "ppron3:pl:dat:m1:ter:akc:praep", "751": "ppron3:pl:dat:m2:ter:akc:npraep", "752": "ppron3:pl:dat:m3:ter:akc:npraep", "753": "ppron3:pl:dat:m3:ter:akc:praep", "754": "ppron3:pl:dat:n:ter:akc:npraep", "755": "ppron3:pl:gen:f:ter:akc:npraep", "756": "ppron3:pl:gen:f:ter:akc:praep", "757": "ppron3:pl:gen:m1:ter:akc:npraep", "758": "ppron3:pl:gen:m1:ter:akc:praep", "759": "ppron3:pl:gen:m2:ter:akc:npraep", "760": "ppron3:pl:gen:m2:ter:akc:praep", "761": "ppron3:pl:gen:m3:ter:akc:npraep", "762": "ppron3:pl:gen:m3:ter:akc:praep", "763": "ppron3:pl:gen:n:ter:akc:npraep", "764": "ppron3:pl:gen:n:ter:akc:praep", "765": "ppron3:pl:inst:f:ter:akc:npraep", "766": "ppron3:pl:inst:f:ter:akc:praep", "767": "ppron3:pl:inst:m1:ter:akc:npraep", "768": "ppron3:pl:inst:m1:ter:akc:praep", "769": "ppron3:pl:inst:m2:ter:akc:npraep", "770": "ppron3:pl:inst:m2:ter:akc:praep", "771": "ppron3:pl:inst:m3:ter:akc:npraep", "772": "ppron3:pl:inst:m3:ter:akc:praep", "773": "ppron3:pl:inst:n:ter:akc:npraep", "774": "ppron3:pl:inst:n:ter:akc:praep", "775": "ppron3:pl:loc:f:ter:akc:praep", "776": "ppron3:pl:loc:m1:ter:akc:praep", "777": "ppron3:pl:loc:m2:ter:akc:praep", "778": "ppron3:pl:loc:m3:ter:akc:praep", "779": "ppron3:pl:loc:n:ter:akc:praep", "780": "ppron3:pl:nom:f:ter:akc:npraep", "781": "ppron3:pl:nom:f:ter:nakc:npraep", "782": "ppron3:pl:nom:m1:ter:akc:npraep", "783": "ppron3:pl:nom:m2:ter:akc:npraep", "784": "ppron3:pl:nom:m3:ter:akc:npraep", "785": "ppron3:pl:nom:n:ter:akc:npraep", "786": "ppron3:sg:acc:f:ter:akc:npraep", "787": "ppron3:sg:acc:f:ter:akc:praep", "788": "ppron3:sg:acc:m1:ter:akc:npraep", "789": "ppron3:sg:acc:m1:ter:akc:praep", "790": "ppron3:sg:acc:m1:ter:nakc:npraep", "791": "ppron3:sg:acc:m1:ter:nakc:praep", "792": "ppron3:sg:acc:m2:ter:akc:praep", "793": "ppron3:sg:acc:m2:ter:nakc:npraep", "794": "ppron3:sg:acc:m2:ter:nakc:praep", "795": "ppron3:sg:acc:m3:ter:akc:npraep", "796": "ppron3:sg:acc:m3:ter:akc:praep", "797": "ppron3:sg:acc:m3:ter:nakc:npraep", "798": "ppron3:sg:acc:m3:ter:nakc:praep", "799": "ppron3:sg:acc:n:ter:akc:npraep", "800": "ppron3:sg:acc:n:ter:akc:praep", "801": "ppron3:sg:dat:f:ter:akc:npraep", "802": "ppron3:sg:dat:f:ter:akc:praep", "803": "ppron3:sg:dat:m1:ter:akc:npraep", "804": "ppron3:sg:dat:m1:ter:akc:praep", "805": "ppron3:sg:dat:m1:ter:nakc:npraep", "806": "ppron3:sg:dat:m2:ter:akc:npraep", "807": "ppron3:sg:dat:m2:ter:nakc:npraep", "808": "ppron3:sg:dat:m3:ter:akc:npraep", "809": "ppron3:sg:dat:m3:ter:akc:praep", "810": "ppron3:sg:dat:m3:ter:nakc:npraep", "811": "ppron3:sg:dat:n:ter:akc:npraep", "812": "ppron3:sg:dat:n:ter:akc:praep", "813": "ppron3:sg:dat:n:ter:nakc:npraep", "814": "ppron3:sg:gen:f:ter:akc:npraep", "815": "ppron3:sg:gen:f:ter:akc:praep", "816": "ppron3:sg:gen:m1:ter:akc:npraep", "817": "ppron3:sg:gen:m1:ter:akc:praep", "818": "ppron3:sg:gen:m1:ter:nakc:npraep", "819": "ppron3:sg:gen:m1:ter:nakc:praep", "820": "ppron3:sg:gen:m2:ter:akc:npraep", "821": "ppron3:sg:gen:m2:ter:akc:praep", "822": "ppron3:sg:gen:m2:ter:nakc:npraep", "823": "ppron3:sg:gen:m3:ter:akc:npraep", "824": "ppron3:sg:gen:m3:ter:akc:praep", "825": 
"ppron3:sg:gen:m3:ter:nakc:npraep", "826": "ppron3:sg:gen:m3:ter:nakc:praep", "827": "ppron3:sg:gen:n:ter:akc:npraep", "828": "ppron3:sg:gen:n:ter:akc:praep", "829": "ppron3:sg:gen:n:ter:nakc:npraep", "830": "ppron3:sg:inst:f:ter:akc:praep", "831": "ppron3:sg:inst:m1:ter:akc:npraep", "832": "ppron3:sg:inst:m1:ter:akc:praep", "833": "ppron3:sg:inst:m2:ter:akc:npraep", "834": "ppron3:sg:inst:m2:ter:akc:praep", "835": "ppron3:sg:inst:m3:ter:akc:npraep", "836": "ppron3:sg:inst:m3:ter:akc:praep", "837": "ppron3:sg:inst:n:ter:akc:npraep", "838": "ppron3:sg:inst:n:ter:akc:praep", "839": "ppron3:sg:loc:f:ter:akc:praep", "840": "ppron3:sg:loc:m1:ter:akc:praep", "841": "ppron3:sg:loc:m2:ter:akc:praep", "842": "ppron3:sg:loc:m3:ter:akc:praep", "843": "ppron3:sg:loc:n:ter:akc:praep", "844": "ppron3:sg:nom:f:ter:akc:npraep", "845": "ppron3:sg:nom:f:ter:akc:praep", "846": "ppron3:sg:nom:m1:ter:akc:npraep", "847": "ppron3:sg:nom:m2:ter:akc:npraep", "848": "ppron3:sg:nom:m2:ter:akc:praep", "849": "ppron3:sg:nom:m3:ter:akc:npraep", "850": "ppron3:sg:nom:n:ter:akc:npraep", "851": "praet:pl:f:imperf", "852": "praet:pl:f:perf", "853": "praet:pl:m1:imperf", "854": "praet:pl:m1:imperf:agl", "855": "praet:pl:m1:perf", "856": "praet:pl:m2:imperf", "857": "praet:pl:m2:perf", "858": "praet:pl:m3:imperf", "859": "praet:pl:m3:perf", "860": "praet:pl:n:imperf", "861": "praet:pl:n:perf", "862": "praet:sg:f:imperf", "863": "praet:sg:f:imperf:agl", "864": "praet:sg:f:imperf:nagl", "865": "praet:sg:f:perf", "866": "praet:sg:m1:imperf", "867": "praet:sg:m1:imperf:agl", "868": "praet:sg:m1:imperf:nagl", "869": "praet:sg:m1:perf", "870": "praet:sg:m1:perf:agl", "871": "praet:sg:m1:perf:nagl", "872": "praet:sg:m2:imperf", "873": "praet:sg:m2:imperf:nagl", "874": "praet:sg:m2:perf", "875": "praet:sg:m2:perf:nagl", "876": "praet:sg:m3:imperf", "877": "praet:sg:m3:imperf:nagl", "878": "praet:sg:m3:perf", "879": "praet:sg:m3:perf:nagl", "880": "praet:sg:n:imperf", "881": "praet:sg:n:perf", "882": "pred", "883": "prep:acc", "884": "prep:acc:nwok", "885": "prep:acc:wok", "886": "prep:dat", "887": "prep:gen", "888": "prep:gen:nwok", "889": "prep:gen:wok", "890": "prep:inst", "891": "prep:inst:nwok", "892": "prep:inst:wok", "893": "prep:loc", "894": "prep:loc:nwok", "895": "prep:loc:wok", "896": "prep:nom", "897": "romandig", "898": "siebie:acc", "899": "siebie:dat", "900": "siebie:gen", "901": "siebie:inst", "902": "siebie:loc", "903": "subst:pl:acc:f", "904": "subst:pl:acc:m1", "905": "subst:pl:acc:m1:pt", "906": "subst:pl:acc:m2", "907": "subst:pl:acc:m3", "908": "subst:pl:acc:n:col", "909": "subst:pl:acc:n:ncol", "910": "subst:pl:acc:n:pt", "911": "subst:pl:dat:f", "912": "subst:pl:dat:m1", "913": "subst:pl:dat:m1:pt", "914": "subst:pl:dat:m2", "915": "subst:pl:dat:m3", "916": "subst:pl:dat:n:col", "917": "subst:pl:dat:n:ncol", "918": "subst:pl:dat:n:pt", "919": "subst:pl:gen:f", "920": "subst:pl:gen:m1", "921": "subst:pl:gen:m1:pt", "922": "subst:pl:gen:m2", "923": "subst:pl:gen:m3", "924": "subst:pl:gen:n:col", "925": "subst:pl:gen:n:ncol", "926": "subst:pl:gen:n:pt", "927": "subst:pl:inst:f", "928": "subst:pl:inst:m1", "929": "subst:pl:inst:m1:pt", "930": "subst:pl:inst:m2", "931": "subst:pl:inst:m3", "932": "subst:pl:inst:n:col", "933": "subst:pl:inst:n:ncol", "934": "subst:pl:inst:n:pt", "935": "subst:pl:loc:f", "936": "subst:pl:loc:m1", "937": "subst:pl:loc:m1:pt", "938": "subst:pl:loc:m2", "939": "subst:pl:loc:m3", "940": "subst:pl:loc:n:col", "941": "subst:pl:loc:n:ncol", "942": "subst:pl:loc:n:pt", "943": 
"subst:pl:nom:f", "944": "subst:pl:nom:m1", "945": "subst:pl:nom:m1:pt", "946": "subst:pl:nom:m2", "947": "subst:pl:nom:m3", "948": "subst:pl:nom:n:col", "949": "subst:pl:nom:n:ncol", "950": "subst:pl:nom:n:pt", "951": "subst:pl:voc:f", "952": "subst:pl:voc:m1", "953": "subst:pl:voc:m1:pt", "954": "subst:pl:voc:m3", "955": "subst:pl:voc:n:col", "956": "subst:pl:voc:n:ncol", "957": "subst:pl:voc:n:pt", "958": "subst:sg:acc:f", "959": "subst:sg:acc:m1", "960": "subst:sg:acc:m2", "961": "subst:sg:acc:m3", "962": "subst:sg:acc:n:col", "963": "subst:sg:acc:n:ncol", "964": "subst:sg:dat:f", "965": "subst:sg:dat:m1", "966": "subst:sg:dat:m2", "967": "subst:sg:dat:m3", "968": "subst:sg:dat:n:col", "969": "subst:sg:dat:n:ncol", "970": "subst:sg:gen:f", "971": "subst:sg:gen:m1", "972": "subst:sg:gen:m2", "973": "subst:sg:gen:m3", "974": "subst:sg:gen:n:col", "975": "subst:sg:gen:n:ncol", "976": "subst:sg:inst:f", "977": "subst:sg:inst:m1", "978": "subst:sg:inst:m2", "979": "subst:sg:inst:m3", "980": "subst:sg:inst:n:col", "981": "subst:sg:inst:n:ncol", "982": "subst:sg:loc:f", "983": "subst:sg:loc:m1", "984": "subst:sg:loc:m2", "985": "subst:sg:loc:m3", "986": "subst:sg:loc:n:col", "987": "subst:sg:loc:n:ncol", "988": "subst:sg:nom:f", "989": "subst:sg:nom:m1", "990": "subst:sg:nom:m2", "991": "subst:sg:nom:m3", "992": "subst:sg:nom:n:col", "993": "subst:sg:nom:n:ncol", "994": "subst:sg:voc:f", "995": "subst:sg:voc:m1", "996": "subst:sg:voc:m2", "997": "subst:sg:voc:m3", "998": "subst:sg:voc:n:col", "999": "subst:sg:voc:n:ncol", "1000": "sym", "1001": "winien:pl:f:imperf", "1002": "winien:pl:m1:imperf", "1003": "winien:pl:m2:imperf", "1004": "winien:pl:m3:imperf", "1005": "winien:pl:n:imperf", "1006": "winien:sg:f:imperf", "1007": "winien:sg:m1:imperf", "1008": "winien:sg:m2:imperf", "1009": "winien:sg:m3:imperf", "1010": "winien:sg:n:imperf", "1011": "xxs:acc", "1012": "xxs:dat", "1013": "xxs:gen", "1014": "xxs:inst", "1015": "xxs:loc", "1016": "xxs:nom", "1017": "xxs:voc", "1018": "xxx"}}}}, {"name": "nps", "sequence": "bool"}, {"name": "nkjp_ids", "sequence": "string"}], "config_name": "nkjp1m", "splits": [{"name": "test", "num_bytes": 8324533, "num_examples": 8964}, {"name": "train", "num_bytes": 65022406, "num_examples": 68943}, {"name": "validation", "num_bytes": 7465442, "num_examples": 7755}], "download_size": 16167009, "dataset_size": 80812381}} | 2022-12-07T16:47:51+00:00 |
a09f12672eb54455d73f6c4dd23f728900100cf7 | # Dataset Card for "fa-paraphrase"
This dataset contains over 1.1 million rows. Each row contains a pair of Farsi sentences that are paraphrases of each other. The datasets used to create this dataset can be found here:
* [tapaco](https://huggingface.co/datasets/tapaco)
* [kaggle](https://www.kaggle.com/datasets/armannikkhah/persian-paraphrase-dataset)
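A minimal loading sketch with the Hugging Face `datasets` library (the `sentence1`/`sentence2` columns and the `train`/`validation`/`test` splits follow this repository's metadata; adjust if they change):

```python
from datasets import load_dataset

# Load the training split of the paraphrase pairs
fa_para = load_dataset("alighasemi/fa-paraphrase", split="train")

pair = fa_para[0]
print(pair["sentence1"])  # first sentence of the pair
print(pair["sentence2"])  # its Farsi paraphrase
```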
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alighasemi/fa-paraphrase | [
"region:us"
]
| 2022-12-07T17:13:10+00:00 | {"Tasks": ["Text2Text Generation"], "Fine-Grained Tasks": ["paraphrase", "query-paraphrasing"], "Languages": ["Persian"], "Multilinguality": ["monolingual", "fa", "fa-IR"], "Sizes": ["n>1M"], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 139373682.4, "num_examples": 881408}, {"name": "test", "num_bytes": 17421710.3, "num_examples": 110176}, {"name": "validation", "num_bytes": 17421710.3, "num_examples": 110176}], "download_size": 98032993, "dataset_size": 174217103.00000003}} | 2022-12-07T17:33:45+00:00 |
b076bba0febaf05f803521f1441d533651d23858 | society-ethics/laion2b_100k_religion | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-07T17:26:29+00:00 | {"license": "cc-by-4.0"} | 2022-12-07T17:27:32+00:00 |
|
e9c5fce8b29f5d64e1ee45ce9222b1269ac6e729 | # Dataset Card for "medmcqa_age_gender_custom"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | society-ethics/medmcqa_age_gender_custom | [
"region:us"
]
| 2022-12-07T18:29:43+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "opa", "dtype": "string"}, {"name": "opb", "dtype": "string"}, {"name": "opc", "dtype": "string"}, {"name": "opd", "dtype": "string"}, {"name": "cop", "dtype": {"class_label": {"names": {"0": "a", "1": "b", "2": "c", "3": "d"}}}}, {"name": "choice_type", "dtype": "string"}, {"name": "exp", "dtype": "string"}, {"name": "subject_name", "dtype": "string"}, {"name": "topic_name", "dtype": "string"}, {"name": "age.infant", "dtype": "bool"}, {"name": "age.child_preschool", "dtype": "bool"}, {"name": "age.child", "dtype": "bool"}, {"name": "age.adolescent", "dtype": "bool"}, {"name": "age.adult", "dtype": "bool"}, {"name": "age.middle_aged", "dtype": "bool"}, {"name": "age.aged", "dtype": "bool"}, {"name": "age.aged_80_over", "dtype": "bool"}, {"name": "gender.male", "dtype": "bool"}, {"name": "gender.female", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 132131827, "num_examples": 182822}], "download_size": 86345498, "dataset_size": 132131827}} | 2022-12-07T18:30:06+00:00 |
620b1279d40e6081d8f782318268dc082b9c945b | # Dataset Card for "olm-bookcorpus-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-bookcorpus-tokenized-1024 | [
"region:us"
]
| 2022-12-07T19:28:08+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 8534733804.0, "num_examples": 1386409}], "download_size": 2291578601, "dataset_size": 8534733804.0}} | 2022-12-07T19:33:36+00:00 |
1e368cd93d5614ee6a7687060cbe7b598491b4ff |
Please note that this dataset is private.
### Using the data
You can stream the data with the data loader:
```python
import os

from datasets import load_dataset

myaudio = load_dataset(
"evageon/myaudio",
use_auth_token=os.environ["HG_USER_TOKEN"], # replace this with your access token
streaming=True)
```
Then you can iterate over the dataset
```python
# replace test with validation or train depending on split you need
print(next(iter(myaudio["test"])))
```
outputs:
```
{'path': 'CD93A8FF-C3ED-4AD4-95A6-8363CCB93B90_spk-0001_seg-0024467:0025150.wav', 'audio': {'path': 'dataset/test/wav/CD93A8FF-C3ED-4AD4-95A6-8363CCB93B90_spk-0001_seg-0024467:0025150.wav', 'array': array([0.00662231, 0.00497437, 0.00518799, ..., 0.01150513, 0.00708008,
0.00296021]), 'sampling_rate': 16000}, 'text': 'خطرا على دول الخليج لماذا اعتبرت أن إيران اليوم والخطر الذي تشكله إيران مختلف'}
``` | evageon/myaudio | [
"region:us"
]
| 2022-12-07T20:56:35+00:00 | {"pretty_name": "MGB2", "alt_glob": [], "alt_sep": "", "dataset_link": "https://huggingface.co/datasets/malmarz/test_mgb2/resolve/main/mgb2.test.tar.gz", "dataset_name": "mgb2_speech", "datasets_path": "datasets", "file_type": "wav", "hf_path": "", "label_column_name": "", "lines": false, "local_dir": false, "new_columns": "", "pal": false, "skiprows": 0, "squad": false, "xml_columns": "", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 384372257.17, "num_examples": 3890}, {"name": "train", "num_bytes": 1488884786.673, "num_examples": 15559}], "download_size": 0, "dataset_size": 1873257043.8430002}} | 2022-12-08T15:28:56+00:00 |
81fa681e6a216879457e50b1821e25731e0ec6b7 | # Dataset Card for "cantonese_processed_guangzhou"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tilos/cantonese_processed_guangzhou | [
"region:us"
]
| 2022-12-07T21:32:19+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4652712704, "num_examples": 4844}], "download_size": 659265457, "dataset_size": 4652712704}} | 2022-12-08T16:19:05+00:00 |
95adf35dffd9afd51b326b417c8aae8ba281f7d6 | # Dataset Card for "dlforcv"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kyu-kang/dlforcv | [
"region:us"
]
| 2022-12-07T21:38:38+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 67733111.0, "num_examples": 107}], "download_size": 67131614, "dataset_size": 67733111.0}} | 2022-12-07T22:10:14+00:00 |
1e2c092fa44a9779a5405c7cdc85a7454393bff1 | # Dataset Card for "id2223_whisper_swedish_augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jsra2/id2223_whisper_swedish_augmented | [
"region:us"
]
| 2022-12-07T21:45:15+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11871603408, "num_examples": 12360}, {"name": "test", "num_bytes": 4868697560, "num_examples": 5069}], "download_size": 2532495364, "dataset_size": 16740300968}} | 2022-12-07T21:50:16+00:00 |
de69ac023094ff0488283b63bf0d3b10a09a8b23 |
﷽
# Dataset Card for Tarteel AI's EveryAyah Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tarteel AI](https://www.tarteel.ai/)
- **Repository:** [Needs More Information]
- **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:[email protected])
### Dataset Summary
This dataset is a collection of Quranic verse recitations by different reciters, paired with diacritized transcriptions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The audio is in Arabic.
## Dataset Structure
### Data Instances
A typical data point comprises the audio file `audio` and its transcription `text`.
The `duration` is given in seconds, and the reciter is identified by `reciter`.
An example from the dataset is:
```
{
'audio': {
'path': None,
'array': array([ 0. , 0. , 0. , ..., -0.00057983,
-0.00085449, -0.00061035]),
'sampling_rate': 16000
},
'duration': 6.478375,
'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
'reciter': 'abdulsamad'
}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: The transcription of the audio file.
- duration: The duration of the audio file.
- reciter: The reciter of the verses.
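A minimal access sketch (assuming the default configuration and the standard `datasets` audio decoding); as noted above, index the row before touching the `audio` column so only that one file is decoded:

```python
from datasets import load_dataset

everyayah = load_dataset("tarteel-ai/everyayah", split="train")

sample = everyayah[0]        # query the row first ...
print(sample["text"], sample["reciter"], sample["duration"])
audio = sample["audio"]      # ... then decode just this one audio file
print(audio["sampling_rate"], audio["array"].shape)
```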
### Data Splits
| | Train | Test | Validation |
| ----- | ----- | ---- | ---------- |
| dataset | 187785 | 23473 | 23474 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
```
### Contributions
This dataset was created by:
| tarteel-ai/everyayah | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:mit",
"region:us"
]
| 2022-12-07T21:53:59+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ar"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "paperswithcode_id": "tarteel-everyayah", "pretty_name": "Tarteel AI - EveryAyah Dataset", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "duration", "dtype": "float64"}, {"name": "text", "dtype": "string"}, {"name": "reciter", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 262627688145.3, "num_examples": 187785}, {"name": "test", "num_bytes": 25156009734.72, "num_examples": 23473}, {"name": "validation", "num_bytes": 23426886730.218, "num_examples": 23474}], "download_size": 117190597305, "dataset_size": 311210584610.23804}, "train-eval-index": [{"config": "clean", "task": "automatic-speech-recognition", "task_id": "speech_recognition", "splits": {"train_split": "train", "eval_split": "test", "validation_split": "validation"}, "col_mapping": {"audio": "audio", "text": "text", "reciter": "text"}, "metrics": [{"type": "wer", "name": "WER"}, {"type": "cer", "name": "CER"}]}]} | 2022-12-09T19:33:08+00:00 |
d1a894fc30563abd922a21c5baa99a4c17b49d92 | # Dataset Card for "olm-october-2022-with-bookcorpus-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-with-bookcorpus-tokenized-1024 | [
"region:us"
]
| 2022-12-07T22:14:29+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 88003461204, "num_examples": 14295559}], "download_size": 23317060365, "dataset_size": 88003461204}} | 2022-12-07T22:55:04+00:00 |
d798e79ecaf1390d68bb3e543356e62ca4cf2039 | # Dataset Card for "banking_sentiment_zs_gpt3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/banking_sentiment_zs_gpt3 | [
"region:us"
]
| 2022-12-07T22:16:36+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "null"}, {"name": "inputs", "struct": [{"name": "explanation", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "prediction", "list": [{"name": "label", "dtype": "string"}, {"name": "score", "dtype": "float64"}]}, {"name": "prediction_agent", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "annotation_agent", "dtype": "string"}, {"name": "multi_label", "dtype": "bool"}, {"name": "explanation", "dtype": "null"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "dtype": "null"}, {"name": "status", "dtype": "string"}, {"name": "event_timestamp", "dtype": "null"}, {"name": "metrics", "struct": [{"name": "text_length", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 11113, "num_examples": 36}], "download_size": 10768, "dataset_size": 11113}} | 2022-12-07T22:16:43+00:00 |
75d7288c6e598ddb3c740f64ec8fe633bade016f | ## Dataset Description
Dataset containing 1453 images of pixel-perfect vector Pokémon.
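A minimal loading sketch (the `image` and `label` columns follow this repository's metadata; `datasets` decodes the images to PIL objects):

```python
from datasets import load_dataset

pokemon = load_dataset("matteopilotto/pokemon_cute", split="train")

example = pokemon[0]
print(example["label"])        # label string for this icon
print(example["image"].size)   # dimensions of the decoded PIL image
```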
## Source Data
Files taken from [Pokémon icons](https://theartificial.github.io/pokemon-icons/#download). | matteopilotto/pokemon_cute | [
"region:us"
]
| 2022-12-07T22:30:07+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6031856.525, "num_examples": 1453}], "download_size": 6196888, "dataset_size": 6031856.525}} | 2022-12-07T22:45:43+00:00 |
425686a02827b3de1f30c72a510baebf3f4f7453 | # Dataset Card for "aop-dataset-2022-11-10-interview-lines-by-youtube"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| ThingsThatDoStuff/aop-dataset-2022-11-10-interview-lines-by-youtube | [
"region:us"
]
| 2022-12-08T01:31:02+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "start", "dtype": "int64"}, {"name": "duration", "dtype": "int64"}, {"name": "name", "dtype": "string"}, {"name": "chars", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "mfS", "dtype": "string"}, {"name": "mfDe", "dtype": "string"}, {"name": "S1", "dtype": "string"}, {"name": "S2", "dtype": "string"}, {"name": "A1", "dtype": "string"}, {"name": "A2", "dtype": "string"}, {"name": "A3", "dtype": "string"}, {"name": "A4", "dtype": "string"}, {"name": "ST-S", "dtype": "int64"}, {"name": "SF-S", "dtype": "int64"}, {"name": "NT-S", "dtype": "int64"}, {"name": "NF-S", "dtype": "int64"}, {"name": "Jumper", "dtype": "int64"}, {"name": "non-Jumper", "dtype": "int64"}, {"name": "PB", "dtype": "int64"}, {"name": "SC", "dtype": "int64"}, {"name": "SB", "dtype": "int64"}, {"name": "PC", "dtype": "int64"}, {"name": "type-128", "dtype": "string"}, {"name": "type-32", "dtype": "string"}, {"name": "stack", "dtype": "string"}, {"name": "S quadra", "dtype": "string"}, {"name": "O/D", "dtype": "string"}, {"name": "Oi/Oe", "dtype": "string"}, {"name": "Di/De", "dtype": "string"}, {"name": "S/P", "dtype": "string"}, {"name": "C/B", "dtype": "string"}, {"name": "dom", "dtype": "string"}, {"name": "temp", "dtype": "string"}, {"name": "O/DD", "dtype": "int64"}, {"name": "D/OO", "dtype": "int64"}, {"name": "Aee", "dtype": "int64"}, {"name": "Aii", "dtype": "int64"}, {"name": "mN", "dtype": "int64"}, {"name": "mS", "dtype": "int64"}, {"name": "mDi", "dtype": "int64"}, {"name": "mDe", "dtype": "int64"}, {"name": "fS", "dtype": "int64"}, {"name": "fN", "dtype": "int64"}, {"name": "fDe", "dtype": "int64"}, {"name": "fDi", "dtype": "int64"}, {"name": "sav S", "dtype": "int64"}, {"name": "sav N", "dtype": "int64"}, {"name": "sav T", "dtype": "int64"}, {"name": "sav F", "dtype": "int64"}, {"name": "dem S", "dtype": "int64"}, {"name": "dem N", "dtype": "int64"}, {"name": "dem T", "dtype": "int64"}, {"name": "dem F", "dtype": "int64"}, {"name": "has Si", "dtype": "int64"}, {"name": "has Ne", "dtype": "int64"}, {"name": "has Ni", "dtype": "int64"}, {"name": "has Se", "dtype": "int64"}, {"name": "has Ti", "dtype": "int64"}, {"name": "has Fe", "dtype": "int64"}, {"name": "has Fi", "dtype": "int64"}, {"name": "has Te", "dtype": "int64"}, {"name": "sav Oi", "dtype": "int64"}, {"name": "sav Di", "dtype": "int64"}, {"name": "sav Oe", "dtype": "int64"}, {"name": "sav De", "dtype": "int64"}, {"name": "dem Oe", "dtype": "int64"}, {"name": "dem De", "dtype": "int64"}, {"name": "dem Oi", "dtype": "int64"}, {"name": "dem Di", "dtype": "int64"}, {"name": "sav Play", "dtype": "int64"}, {"name": "sav Blast", "dtype": "int64"}, {"name": "sav Consume", "dtype": "int64"}, {"name": "sav Sleep", "dtype": "int64"}, {"name": "dem Play", "dtype": "int64"}, {"name": "dem Blast", "dtype": "int64"}, {"name": "dem Consume", "dtype": "int64"}, {"name": "dem Sleep", "dtype": "int64"}, {"name": "has Play", "dtype": "int64"}, {"name": "has Blast", "dtype": "int64"}, {"name": "has Consume", "dtype": "int64"}, {"name": "has Sleep", "dtype": "int64"}, {"name": "last Play", "dtype": "int64"}, {"name": "last Blast", "dtype": "int64"}, {"name": "last Consume", "dtype": "int64"}, {"name": "last Sleep", "dtype": "int64"}, {"name": "dom in", "dtype": "int64"}, {"name": "dom en", "dtype": "int64"}, {"name": "sav ST", "dtype": "int64"}, {"name": "sav SF", "dtype": 
"int64"}, {"name": "sav NT", "dtype": "int64"}, {"name": "sav NF", "dtype": "int64"}, {"name": "line_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86092404, "num_examples": 82290}, {"name": "test", "num_bytes": 28744341, "num_examples": 27430}, {"name": "valid", "num_bytes": 28700636, "num_examples": 27430}], "download_size": 32536937, "dataset_size": 143537381}} | 2022-12-08T01:51:52+00:00 |
47633e13b2f9aaa438c2dc7122b0e055fdc06176 | cmudrc/publications | [
"multilinguality:monolingual",
"size_categories:100<n<1K",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
]
| 2022-12-08T01:51:57+00:00 | {"language": "en", "license": "other", "multilinguality": ["monolingual"], "size_categories": ["100<n<1K"], "source_datasets": ["original"]} | 2022-12-08T21:30:46+00:00 |
|
191e21adb653e244098280ff5c201bd22daf10cb | # Dataset Card for "image3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Steven0633/image3 | [
"region:us"
]
| 2022-12-08T02:00:02+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "arrange chairs", "1": "arrange flowers", "2": "bake potato", "3": "beat eggs", "4": "bend knee", "5": "bend tree", "6": "bind hair", "7": "bite apple", "8": "block door", "9": "block window", "10": "boil egg", "11": "boil potato", "12": "break bowl", "13": "break cup", "14": "break door", "15": "break egg", "16": "break glass", "17": "break window", "18": "burn book", "19": "burn paper", "20": "burn tree", "21": "burn wood", "22": "burst balloon", "23": "burst door", "24": "carry bag", "25": "carry book", "26": "carry umbrella", "27": "chop carrot", "28": "chop meat", "29": "chop onion", "30": "chop tree", "31": "chop wood", "32": "close book", "33": "close cabinet", "34": "close door", "35": "close drawer", "36": "close window", "37": "coil rope", "38": "cook egg", "39": "cook meat", "40": "cook onion", "41": "cook potato", "42": "crack bottle", "43": "crack egg", "44": "crack glass", "45": "crack window", "46": "crash car", "47": "crop hair", "48": "cut apple", "49": "cut meat", "50": "cut onion", "51": "cut potato", "52": "cut tree", "53": "cut wood", "54": "fasten door", "55": "fasten window", "56": "fold paper", "57": "fry egg", "58": "fry meat", "59": "fry potato", "60": "grate carrot", "61": "grate potato", "62": "grind meat", "63": "hang bag", "64": "hang shirt", "65": "ignite paper", "66": "ignite wood", "67": "insert key", "68": "kick door", "69": "kick football", "70": "knot rope", "71": "label bottle", "72": "label box", "73": "lock cabinet", "74": "lock door", "75": "lock drawer", "76": "lock window", "77": "mash potato", "78": "mix eggs", "79": "open bottle", "80": "open box", "81": "open cabinet", "82": "open door", "83": "open drawer", "84": "open umbrella", "85": "open window", "86": "park car", "87": "peel apple", "88": "peel banana", "89": "peel carrot", "90": "peel orange", "91": "peel potato", "92": "pile books", "93": "pile boxes", "94": "pile wood", "95": "pitch baseball", "96": "ride bicycle", "97": "rip paper", "98": "roll paper", "99": "roll umbrella", "100": "saw tree", "101": "saw wood", "102": "scratch car", "103": "scratch knee", "104": "shave hair", "105": "shut door", "106": "shut window", "107": "skin knee", "108": "slice apple", "109": "slice meat", "110": "slice onion", "111": "slice potato", "112": "smash door", "113": "smash window", "114": "soak hair", "115": "soak shirt", "116": "spill coffee", "117": "split tree", "118": "split wood", "119": "squeeze bottle", "120": "squeeze orange", "121": "stain paper", "122": "stain shirt", "123": "stir coffee", "124": "stir soup", "125": "strip tree", "126": "tear book", "127": "tear paper", "128": "tear shirt", "129": "throw apple", "130": "throw baseball", "131": "throw football", "132": "throw frisbee", "133": "tie shoe", "134": "trim hair", "135": "trim tree", "136": "twist hair", "137": "twist rope", "138": "wrap book", "139": "wrap box"}}}}], "splits": [{"name": "train", "num_bytes": 14000672.0, "num_examples": 420}], "download_size": 13171812, "dataset_size": 14000672.0}} | 2022-12-08T02:20:52+00:00 |
78fafd897851c9d4e0879a8572a920fd90604829 | # Dataset Card for "final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kyu-kang/final | [
"region:us"
]
| 2022-12-08T03:38:12+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 68391793.0, "num_examples": 107}], "download_size": 0, "dataset_size": 68391793.0}} | 2022-12-08T03:40:00+00:00 |
9c82f7903d64fef28ff36a409f2f6a99c2083b65 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-xl-16384-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-30a180-2375174515 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T06:06:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-xl-16384-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-12-08T17:09:09+00:00 |
8ec6ef8b2770574f99269e82cd442b10186db4d3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-xl-16384-book-summary
* Dataset: ccdv/arxiv-summarization
* Config: document
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-ccdv__arxiv-summarization-document-dcd037-2375274516 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T06:06:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ccdv/arxiv-summarization"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-xl-16384-book-summary", "metrics": ["bertscore", "perplexity"], "dataset_name": "ccdv/arxiv-summarization", "dataset_config": "document", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}} | 2022-12-09T01:24:32+00:00 |
a971f52a130383fcf25dd5c6e5d57f18c273eece |
# SMC Malayalam Speech Corpus
Malayalam Speech Corpus (MSC) is a repository of curated speech samples collected using MSC web application, released by Swathanthra Malayalam Computing.
The official blog post and source data can be found at [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/).
## Dataset Description
- **Homepage:** [https://blog.smc.org.in/malayalam-speech-corpus/](https://blog.smc.org.in/malayalam-speech-corpus/)
### Dataset Summary
The first version of the Malayalam Speech Corpus contains 1541 speech samples from 75 contributors, amounting to 1:38:16 (hh:mm:ss) of speech. It has 482 unique sentences, 1400 unique words, 553 unique syllables, and 48 unique phonemes.
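A minimal loading sketch (the column names and the 48 kHz sampling rate follow this repository's metadata):

```python
from datasets import load_dataset

msc = load_dataset("thennal/msc", split="train")

row = msc[0]
print(row["transcript"], row["speaker_gender"], row["speaker_age"])
audio = row["audio"]            # decoded on access
print(audio["sampling_rate"])   # 48000 per the dataset metadata
```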
| thennal/msc | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:ml",
"license:cc-by-sa-4.0",
"region:us"
]
| 2022-12-08T06:19:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ml"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "Swathanthra Malayalam Computing Malayalam Speech Corpus", "tags": [], "dataset_info": {"features": [{"name": "speechid", "dtype": "string"}, {"name": "speaker_id", "dtype": "string"}, {"name": "review_score", "dtype": "int64"}, {"name": "transcript", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "speaker_gender", "dtype": "string"}, {"name": "speaker_age", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}], "splits": [{"name": "train", "num_bytes": 581998721.306, "num_examples": 1541}], "download_size": 422643542, "dataset_size": 581998721.306}} | 2022-12-08T06:49:31+00:00 |
0b6233b2eb00c57dfb4cee8acedfce128e0133b8 | # Dataset Card for "conv_intent_w_inference"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rungalileo/conv_intent_w_inference | [
"region:us"
]
| 2022-12-08T06:20:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "PlayMusic", "1": "BookRestaurant", "2": "GetWeather", "3": "SearchCreativeWork", "4": "AddToPlaylist", "5": "RateBook", "6": "SearchScreeningEvent"}}}}, {"name": "str_label", "dtype": "string"}], "splits": [{"name": "inference_example", "num_bytes": 230847, "num_examples": 2723}, {"name": "train", "num_bytes": 856479, "num_examples": 11061}, {"name": "validation", "num_bytes": 56192, "num_examples": 700}], "download_size": 491008, "dataset_size": 1143518}} | 2022-12-08T06:20:21+00:00 |
1345f29321e8020aa4a73e39be9d5bbd27b75b46 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-342ce6-2376174529 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T06:51:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-12-08T07:39:24+00:00 |
d3d7410306f754c7a581423d057ff04a38ba1648 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-29029e-2376274530 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T06:51:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-12-08T07:39:56+00:00 |
cee2f2d4c0f18879213e5029b3c35576fb9422c8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-29029e-2376274531 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T06:51:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-12-08T07:40:11+00:00 |
502e746a9d9640db3f706b4416209404ada6a5b1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sam-mosaic](https://huggingface.co/sam-mosaic) for evaluating this model. | autoevaluate/autoeval-eval-billsum-default-e23aac-2376574533 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T07:06:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-12-08T08:51:32+00:00 |
ea72d5e0f10537b11919e379e4b8e88b179a62ad | # Indic TTS Malayalam Speech Corpus
The Malayalam subset of [Indic TTS Corpus](https://www.iitm.ac.in/donlab/tts/index.php), taken from
[this Kaggle database](https://www.kaggle.com/datasets/kavyamanohar/indic-tts-malayalam-speech-corpus). The corpus contains
one male and one female speaker, with a 2:1 male-to-female ratio of samples due to missing files for the female speaker. The license is given
in the repository. | thennal/indic_tts_ml | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:ml",
"license:other",
"region:us"
]
| 2022-12-08T07:18:47+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["ml"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-to-speech", "automatic-speech-recognition"], "task_ids": [], "pretty_name": "Indic TTS Malayalam Speech Corpus", "dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}, {"name": "gender", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4830182115.4, "num_examples": 8600}], "download_size": 3966895730, "dataset_size": 4830182115.4}, "tags": []} | 2022-12-08T20:23:33+00:00 |
9eaeeeb72ba952a3bb539780a1ee012b1aa1a2e5 | zhangzi/arknights | [
"license:openrail",
"region:us"
]
| 2022-12-08T08:12:23+00:00 | {"license": "openrail"} | 2022-12-08T08:12:23+00:00 |
|
cb310ffbe09bccb23393b94bad25267a03d5766a | # Dataset Card for "competency_games"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joshr/competency_games | [
"region:us"
]
| 2022-12-08T08:46:34+00:00 | {"dataset_info": {"features": [{"name": "description", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "sg_train", "num_bytes": 51120000, "num_examples": 90000}, {"name": "sg_validation", "num_bytes": 2726400, "num_examples": 4800}, {"name": "sg_test", "num_bytes": 2726400, "num_examples": 4800}, {"name": "sg_llm_eval", "num_bytes": 272640, "num_examples": 480}, {"name": "sg_exemplar", "num_bytes": 6816, "num_examples": 12}, {"name": "cc_train", "num_bytes": 119621976, "num_examples": 90000}, {"name": "cc_validation", "num_bytes": 6379974, "num_examples": 4800}, {"name": "cc_test", "num_bytes": 6380186, "num_examples": 4800}, {"name": "cc_llm_eval", "num_bytes": 637971, "num_examples": 480}, {"name": "cc_exemplar", "num_bytes": 15965, "num_examples": 12}, {"name": "hn_train", "num_bytes": 183955709, "num_examples": 90000}, {"name": "hn_validation", "num_bytes": 9810256, "num_examples": 4800}, {"name": "hn_test", "num_bytes": 9809441, "num_examples": 4800}, {"name": "hn_llm_eval", "num_bytes": 981379, "num_examples": 480}, {"name": "hn_exemplar", "num_bytes": 24478, "num_examples": 12}, {"name": "ring_train", "num_bytes": 79429571, "num_examples": 90000}, {"name": "ring_validation", "num_bytes": 4236560, "num_examples": 4800}, {"name": "ring_test", "num_bytes": 4236287, "num_examples": 4800}, {"name": "ring_llm_eval", "num_bytes": 423614, "num_examples": 480}, {"name": "ring_exemplar", "num_bytes": 10597, "num_examples": 12}, {"name": "voting_train", "num_bytes": 48250182, "num_examples": 90000}, {"name": "voting_validation", "num_bytes": 2573367, "num_examples": 4800}, {"name": "voting_test", "num_bytes": 2573448, "num_examples": 4800}, {"name": "voting_llm_eval", "num_bytes": 257391, "num_examples": 480}, {"name": "voting_exemplar", "num_bytes": 6430, "num_examples": 12}], "download_size": 93750310, "dataset_size": 536467038}} | 2022-12-08T08:53:30+00:00 |
cf1cb0abe4527fdfbc52091de4f81ae04388f8a1 | Ravisahu06/modelface | [
"license:mit",
"region:us"
]
| 2022-12-08T09:27:07+00:00 | {"license": "mit"} | 2022-12-08T09:28:37+00:00 |
|
c496e013c584ceab9e834b2a93e1cd9766b9eb1b |
# Dataset Card for Deezer ego nets
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://snap.stanford.edu/data/deezer_ego_nets.html)**
- **Paper:** (see citation)
### Dataset Summary
The Deezer ego nets dataset contains ego-nets of Eastern European users collected from the music streaming service Deezer in February 2020. Nodes are users and edges are mutual follower relationships.
### Supported Tasks and Leaderboards
The related task is binary classification: predicting the gender of the ego node in each graph.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/deezer_ego_nets")
# For the train set (replace by valid or test as needed); convert each row to a PyG Data object
dataset_pg_list = [
    Data(edge_index=torch.tensor(g["edge_index"], dtype=torch.long),
         y=torch.tensor(g["y"]), num_nodes=g["num_nodes"])
    for g in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: #labels): the label(s) to predict for the graph
- `num_nodes` (int): number of nodes of the graph
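For reference, a quick sketch of inspecting one raw row before any PyGeometric conversion (split name as in the snippet above):

```python
from datasets import load_dataset

deezer = load_dataset("graphs-datasets/deezer_ego_nets", split="train")

graph = deezer[0]
print(graph["num_nodes"])            # number of users in this ego-net
print(len(graph["edge_index"][0]))   # number of edge entries
print(graph["y"])                    # label(s) for the ego user
```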
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under GPL-3.0 license.
### Citation Information
See also [github](https://github.com/benedekrozemberczki/karateclub).
```
@inproceedings{karateclub,
title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
year = {2020},
pages = {3125–3132},
booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
organization = {ACM},
}
``` | graphs-datasets/deezer_ego_nets | [
"task_categories:graph-ml",
"license:gpl-3.0",
"region:us"
]
| 2022-12-08T09:38:31+00:00 | {"license": "gpl-3.0", "task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:36:48+00:00 |
8c636656950ac009470d4af3829e4db538bdd507 |
# Dataset Card for Reddit threads
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://snap.stanford.edu/data/reddit_threads.html)**
- **Paper:**: (see citation)
### Dataset Summary
The `Reddit threads` dataset contains 'discussion and non-discussion based threads from Reddit which we collected in May 2018. Nodes are Reddit users who participate in a discussion and links are replies between them' (doc).
### Supported Tasks and Leaderboards
The related task is binary classification: predicting whether a thread is discussion-based or not.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Dataset information
- 203,088 graphs
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under GPL-3.0 license.
### Citation Information
See also [github](https://github.com/benedekrozemberczki/karateclub).
```
@inproceedings{karateclub,
title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
year = {2020},
pages = {3125–3132},
booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
organization = {ACM},
}
``` | graphs-datasets/reddit_threads | [
"task_categories:graph-ml",
"license:gpl-3.0",
"region:us"
]
| 2022-12-08T09:40:25+00:00 | {"license": "gpl-3.0", "task_categories": ["graph-ml"]} | 2023-02-07T16:36:39+00:00 |
c01d97cec814c6b18e476178ea10792b7b88ab14 |
# Dataset Card for Twitch ego nets
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://snap.stanford.edu/data/twitch_ego_nets.html)**
- **Paper:**: (see citation)
### Dataset Summary
The `Twitch ego nets` dataset contains ' ego-nets of Twitch users who participated in the partnership program in April 2018. Nodes are users and links are friendships.' (doc).
### Supported Tasks and Leaderboards
The related task is binary classification: predicting whether a user plays a single game or multiple games.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Dataset information
- 127,094 graphs
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under GPL-3.0 license.
### Citation Information
See also [github](https://github.com/benedekrozemberczki/karateclub).
```
@inproceedings{karateclub,
title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
year = {2020},
pages = {3125–3132},
booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
organization = {ACM},
}
``` | graphs-datasets/twitch_egos | [
"task_categories:graph-ml",
"license:gpl-3.0",
"region:us"
]
| 2022-12-08T09:42:51+00:00 | {"license": "gpl-3.0", "task_categories": ["graph-ml"]} | 2023-02-07T16:36:30+00:00 |
2eb859f596c3fca48a7bc13ea0681b1e59d73fae | wheart/web3test1 | [
"license:openrail",
"region:us"
]
| 2022-12-08T09:46:43+00:00 | {"license": "openrail"} | 2022-12-08T09:50:14+00:00 |
|
8aa98e62b05c881c9e22005fa662d72f0426ec7f |
# Dataset Card for MNIST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The `MNIST` dataset consists of 55000 images in 10 classes, represented as graphs. It comes from a computer vision dataset.
### Supported Tasks and Leaderboards
`MNIST` should be used for multiclass graph classification.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 55,000 |
| average #nodes | 70.6 |
| average #edges | 564.5 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node
### Data Splits
This data is split. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | graphs-datasets/MNIST | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
]
| 2022-12-08T09:52:08+00:00 | {"license": "mit", "task_categories": ["graph-ml"]} | 2023-02-07T16:37:15+00:00 |
c3d5c7aa3321c405b2818dc5f89125a84e28c7e4 |
# Dataset Card for CIFAR10
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The `CIFAR10` dataset consists of 45000 images in 10 classes, represented as graphs.
### Supported Tasks and Leaderboards
`CIFAR10` should be used for multiclass graph classification.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 45,000 |
| average #nodes | 117.6 |
| average #edges | 941.2 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node
### Data Splits
This data is split. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | graphs-datasets/CIFAR10 | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
]
| 2022-12-08T09:59:00+00:00 | {"license": "mit", "task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:37:24+00:00 |
ad81d3f084a7d576fa3e1b0c543a25d1654550d2 | jmpinero/prueba | [
"region:us"
]
| 2022-12-08T10:01:41+00:00 | {} | 2022-12-08T10:09:25+00:00 |
|
5c57b14ae6f8d48ecdf72a15dda64597bfa7d3ad |
# Dataset Card for CSL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The CSL dataset is a synthetic dataset designed to test GNN expressivity.
### Supported Tasks and Leaderboards
`CSL` should be used for binary graph classification, i.e. predicting whether graphs are isomorphic or not.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 150 |
| average #nodes | 41.0 |
| average #edges | 164.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is split. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | graphs-datasets/CSL | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
]
| 2022-12-08T10:03:06+00:00 | {"license": "mit", "task_categories": ["graph-ml"]} | 2023-02-07T16:37:07+00:00 |
df19b007b610be45bbb910678e83fd87db9c78c2 | # Dataset Card for "mnist_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ansa00/mnist_images | [
"region:us"
]
| 2022-12-08T11:07:40+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "splits": [{"name": "train", "num_bytes": 594553015.3377689, "num_examples": 51021}, {"name": "test", "num_bytes": 104426905.66123116, "num_examples": 9004}], "download_size": 523900419, "dataset_size": 698979920.9990001}} | 2022-12-08T11:08:36+00:00 |
c59a3a11d2b192ec4fcc667f6c000c8436f15fd2 | # Dataset Card for OrangeSum
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [OrangeSum repository](https://github.com/Tixierae/OrangeSum)
- **Paper:** [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
- **Point of Contact:** [Antoine J.-P. Tixier]([email protected])
### Dataset Summary
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous.
Each article featured a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.
### Supported Tasks and Leaderboards
**Tasks:** OrangeSum Title and OrangeSum Abstract.
To this day, there is no Leaderboard for this dataset.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
A data instance consists of a news article and a summary. The summary can be a short abstract or a title depending on the configuration.
Example:
**Document:** Le temps sera pluvieux sur huit départements de la France ces prochaines heures : outre les trois départements bretons placés en vigilance orange jeudi matin, cinq autres départements du sud du Massif Central ont été à leur tour placés en alerte orange pluie et inondation. Il s'agit de l'Aveyron, du Cantal, du Gard, de la Lozère, et de la Haute-Loire. Sur l'ensemble de l'épisode, les cumuls de pluies attendus en Bretagne sont compris entre 40 et 60 mm en 24 heures et peuvent atteindre localement les 70 mm en 24 heures.Par la suite, la dégradation qui va se mettre en place cette nuit sur le Languedoc et le sud du Massif Central va donner sur l'Aveyron une première salve intense de pluie. Des cumuls entre 70 et 100 mm voir 120 mm localement sont attendus sur une durée de 24 heures. Sur le relief des Cévennes on attend de 150 à 200 mm, voire 250 mm très ponctuellement sur l'ouest du Gard et l'est de la Lozère. Cet épisode va s'estomper dans la soirée avec le décalage des orages vers les régions plus au nord. Un aspect orageux se mêlera à ces précipitations, avec de la grêle possible, des rafales de vent et une forte activité électrique.
**Abstract:** Outre les trois départements bretons, cinq autres départements du centre de la France ont été placés en vigilance orange pluie-inondation.
**Title:** Pluie-inondations : 8 départements en alerte orange.
### Data Fields
`text`: the document to be summarized. \
`summary`: the summary of the source document.
### Data Splits
The data is split into training, validation and test sets in both configurations.
| | train | validation | test |
|----------|------:|-----------:|-----:|
| Abstract | 21400 | 1500 | 1500 |
| Title | 30658 | 1500 | 1500 |
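As a quick orientation, a minimal loading sketch; the configuration names `abstract` and `title` follow this repository's metadata, so adjust them if you use a different mirror:
```python
from datasets import load_dataset

abstracts = load_dataset("rlasseri/test-OrangeSum-small", "abstract")
titles = load_dataset("rlasseri/test-OrangeSum-small", "title")

sample = abstracts["train"][0]
print(sample["text"][:200])   # source article
print(sample["summary"])      # professionally written abstract
```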
## Dataset Creation
### Curation Rationale
The goal here was to create a French equivalent of the recently introduced [XSum](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset) dataset. Unlike the historical summarization datasets, CNN, DailyMail, and NY Times, which favor extractive strategies, XSum, as well as OrangeSum require the models to display a high degree of abstractivity to perform well. The summaries in OrangeSum are not catchy headlines, but rather capture the gist of the articles.
### Source Data
#### Initial Data Collection and Normalization
Each article features a single-sentence title as well as a very brief abstract. Extracting these two fields from each news article page creates two summarization tasks: OrangeSum Title and OrangeSum Abstract. As a post-processing step, all empty articles and those whose summaries were shorter than 5 words were removed. For OrangeSum Abstract, the top 10% of articles in terms of proportion of novel unigrams in the abstracts were removed, as it was observed that such abstracts tend to be introductions rather than real abstracts. This corresponded to a threshold of 57% novel unigrams. For both OrangeSum Title and OrangeSum Abstract, 1500 pairs for testing and 1500 for validation are set aside, and all the remaining ones are used for training.
#### Who are the source language producers?
The authors of the articles.
### Annotations
#### Annotation process
The summaries are professionally written by the authors of the articles.
#### Who are the annotators?
The authors of the articles.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was initially created by Antoine J.-P. Tixier.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
### Contributions
Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset. | rlasseri/test-OrangeSum-small | [
"task_categories:summarization",
"task_ids:news-articles-headline-generation",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fr",
"license:unknown",
"arxiv:2010.12321",
"region:us"
]
| 2022-12-08T11:33:24+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["fr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-headline-generation", "news-articles-summarization"], "paperswithcode_id": "orangesum", "pretty_name": "OrangeSum", "dataset_info": [{"config_name": "abstract", "features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 53531651, "num_examples": 21401}, {"name": "test", "num_bytes": 3785207, "num_examples": 1500}, {"name": "validation", "num_bytes": 3698650, "num_examples": 1500}], "download_size": 23058350, "dataset_size": 61015508}, {"config_name": "title", "features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 65225136, "num_examples": 30659}, {"name": "test", "num_bytes": 3176690, "num_examples": 1500}, {"name": "validation", "num_bytes": 3276713, "num_examples": 1500}], "download_size": 27321627, "dataset_size": 71678539}]} | 2022-12-08T13:15:18+00:00 |
212fdac8fad8ea663ddc3f110fcfc3734e2b5983 |
# Dataset Card for AQSOL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:**: (see citation)
### Dataset Summary
The AQSOL dataset comes "from the Benchmarking Graph Neural Networks paper based on AqSolDB, a standardized database of 9,982 molecular graphs with their aqueous solubility values, collected from 9 different data sources" (PyGeometric doc).
### Supported Tasks and Leaderboards
`AQSOL` should be used for graph regression, on aqueous solubility.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [Data(graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 9,833 |
| average #nodes | 17.6 |
| average #edges | 35.8 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is split. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | graphs-datasets/AQSOL | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
]
| 2022-12-08T11:54:55+00:00 | {"license": "mit", "task_categories": ["graph-ml"]} | 2023-02-07T16:36:58+00:00 |
d6affd3ff35c09dc378e32070ca77db32783bb19 | vekkt/french_CEFR | [
"license:mit",
"region:us"
]
| 2022-12-08T12:57:46+00:00 | {"license": "mit"} | 2022-12-13T12:36:43+00:00 |
|
c9d5f8777f487ccd1e1fdddb73c2b325ccdc1844 |
# wikipedia persons masked: A filtered version of the wikipedia dataset, with only pages of people
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Contains ~70k pages from wikipedia, each describing a person. For each page, the person described in the text
is masked with a <mask> token. The ground truth for every mask is provided.
### Supported Tasks and Leaderboards
The dataset supports the fill-mask task, but can also be used for other tasks such as question answering,
e.g. "Who is <mask>?"
### Languages
*english only*
## Dataset Structure
There is one large dataset file (dataset.jsonl.xz), containing all data.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/wikipedia-persons-masked')
```
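Building on the loading call above, a minimal fill-mask sketch could score a short window around the first mask. The model choice is an assumption (any model whose mask token is `<mask>` works), and pairing the first mask with the first entry of the entity list assumes the list follows occurrence order:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")  # assumed model; mask token is <mask>

split = next(iter(dataset.values()))   # whichever split the repository exposes
example = split[0]
text = example["masked_text_original"]

# Full pages are far longer than the model's context window, so only a short
# window around the first mask is scored here, purely as an illustration.
start = max(text.index("<mask>") - 200, 0)
window = text[start : start + 400]

print(fill(window, top_k=3))
print("ground truth:", example["masked_entities_original"][0])
```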
### Data Fields
Columns are:
- id: the id in the original dataset
- url: the link to the wikipedia page
- title: the title of the wikipedia page
- text: the original wikipedia text
- sentences: text split to sentences
- paraphrased_sentences: text split to sentences, with each sentence paraphrased (e.g. mutated a bit)
- masked_text_original: original text with the entity masked in every occurrence
- masked_entities_original: array of entities masked in masked_text_original
- masked_text_paraphrased: paraphrased text with the entity masked in every occurrence
- masked_entities_paraphrased: array of entities masked in masked_text_paraphrased
### Data Splits
There are no splits.
## Dataset Creation
This dataset was created by using the wikipedia dataset from huggingface and processing it from there.
People were queried via wikidata. The texts were split with nltk punkt, paraphrased with tuner007's pegasus.
The entity recognition was performed with bert-base-NER by dslim and recognized entities replaced with a mask token.
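As an illustration of that masking step (not the exact script used to build the corpus), a sketch with the same NER model might look like this:
```python
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def mask_persons(text: str, mask_token: str = "<mask>") -> str:
    """Replace every person mention found by the NER model with the mask token."""
    persons = [e for e in ner(text) if e["entity_group"] == "PER"]
    # Replace from the end of the string so earlier character offsets stay valid.
    for ent in sorted(persons, key=lambda e: e["start"], reverse=True):
        text = text[: ent["start"]] + mask_token + text[ent["end"] :]
    return text

print(mask_persons("Ada Lovelace was an English mathematician and writer."))
```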
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@skatinger](https://github.com/skatinger) for adding this dataset. | rcds/wikipedia-persons-masked | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-08T13:51:30+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask"], "pretty_name": "wikipedia persons masked: A filtered version of the wikipedia dataset, with only pages of people."} | 2022-12-14T08:19:17+00:00 |
bbf3882ecc8c605982412305a89aeba241f94be3 | Gr3en/Mroue | [
"task_categories:text-to-image",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:artistic-2.0",
"region:us"
]
| 2022-12-08T14:40:46+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["artistic-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "A dataset made out of photo of a performance by Rabih Mroue", "tags": []} | 2022-12-08T14:46:37+00:00 |
|
f5ada01a737a001acfd5fb2e993e05499e1b0426 | Filterings on top of near-dedup + line filtering:
* Comments filtering (at least 1% of the number of lines should be comments/docstrings)
* Stars filtering (minimum of 5 stars) (on top of near-dedup + line filtering)
|Language |Before filtering | Stars | Comments ratio| More near-dedup| Tokenizer fertility|
|-------|--------|-------|--------|--------|--------|
|Python | 75.61 GB| 26.56 GB |65.64 GB | 61.97 GB | 72.52 GB |
|Java | 110 GB |35.83 GB |92.7 GB | 88.42 GB | 105.47 GB |
|Javascript | 82.7 GB |20.76 GB |57.5 GB| 65.09 GB | 76.37 GB |
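For illustration, a simplified sketch of the filters described above; it counts only `#` comment lines and ignores docstrings, so the real comments filter is stricter than this:
```python
def passes_filters(source: str, stars: int,
                   min_comment_ratio: float = 0.01, min_stars: int = 5) -> bool:
    """Rough approximation of the comment-ratio and stars filters."""
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    if not lines:
        return False
    comment_lines = sum(1 for line in lines if line.startswith("#"))
    return comment_lines / len(lines) >= min_comment_ratio and stars >= min_stars

print(passes_filters("# add two numbers\ndef add(a, b):\n    return a + b\n", stars=12))
```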
| loubnabnl/data-filtering-statistics | [
"region:us"
]
| 2022-12-08T15:01:09+00:00 | {} | 2022-12-14T13:32:36+00:00 |
3a65d415bb91f6bf01d51ed9beaa5633c5dbcd67 | DonJoseValle/image | [
"region:us"
]
| 2022-12-08T15:20:20+00:00 | {} | 2022-12-08T15:20:56+00:00 |
|
54e07e370ffb19c1b5eebf13055e8b6af33991ad | HuggingFaceM4/SNLI-VE | [
"license:bsd-3-clause",
"region:us"
]
| 2022-12-08T15:21:47+00:00 | {"license": "bsd-3-clause"} | 2022-12-14T17:20:26+00:00 |
|
32c834ecedf939da775101c4550e197e650a29ec | Lubub/Bob | [
"license:apache-2.0",
"region:us"
]
| 2022-12-08T15:45:33+00:00 | {"license": "apache-2.0"} | 2022-12-08T16:17:37+00:00 |
|
911fc6def91ac58a3c3e402fae1e2fd36c4dba13 | # Dataset Card for "olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-perplexity-filters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295-perplexity-filters | [
"region:us"
]
| 2022-12-08T16:05:42+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "crawl_timestamp", "dtype": "float64"}, {"name": "kenlm_ppl", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 33197245533.0, "num_examples": 14558171}], "download_size": 20748879886, "dataset_size": 33197245533.0}} | 2022-12-08T22:11:48+00:00 |
fa86c72a1a9dd4dd334d9893dcd23dba444990ef |
# ULCA ASR Dataset Malayalam Speech Corpus
The labelled Malayalam speech subcorpus from the larger [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus).
The speech is taken from news broadcasts, and is largely composed of short soundbites with some longer outliers.
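A minimal loading sketch; the field names `text` and `audio` and the single `train` split follow this repository's metadata:
```python
from datasets import load_dataset

ds = load_dataset("thennal/ulca_ml", split="train")

sample = ds[0]
print(sample["text"])               # transcript
audio = sample["audio"]             # decoded to {"array", "sampling_rate", "path"}
print(audio["sampling_rate"], len(audio["array"]))
```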
## Dataset Description
- **Repository:** [https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus) | thennal/ulca_ml | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:ml",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-08T16:51:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ml"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ULCA ASR Dataset Malayalam Speech Corpus", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 1117843608.106, "num_examples": 8614}], "download_size": 321100264, "dataset_size": 1117843608.106}, "tags": []} | 2022-12-08T17:15:07+00:00 |
dc256e0b81da9a895062265a444ce2b62c14b954 | Doganaws/dam1r | [
"license:openrail",
"region:us"
]
| 2022-12-08T16:52:20+00:00 | {"license": "openrail"} | 2022-12-08T16:52:47+00:00 |
|
61210a16d010f73fd531393567f4bb467c10f4d0 | # Dataset Card for NoCaps
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nocaps.org/](https://nocaps.org/)
- **Paper:** [nocaps: novel object captioning at scale](https://openaccess.thecvf.com/content_ICCV_2019/papers/Agrawal_nocaps_novel_object_captioning_at_scale_ICCV_2019_paper.pdf)
- **Leaderboard:**
- **Point of Contact:**: [email protected]
### Dataset Summary
Dubbed NoCaps for novel object captioning at scale, NoCaps consists of 166,100 human-generated captions describing 15,100 images from the Open Images validation and test sets.
The associated training data consists of COCO image-caption pairs, plus Open Images image-level labels and object bounding boxes.
Since Open Images contains many more classes than COCO, nearly 400 object classes seen in test images have no or very few associated training captions (hence, nocaps).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance has the following structure:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=732x1024 at 0x7F574A3A9B50>,
'image_coco_url': 'https://s3.amazonaws.com/nocaps/val/0013ea2087020901.jpg',
'image_date_captured': '2018-11-06 11:04:33',
'image_file_name': '0013ea2087020901.jpg',
'image_height': 1024,
'image_width': 732,
'image_id': 0,
'image_license': 0,
'image_open_images_id': '0013ea2087020901',
'annotations_ids': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
'annotations_captions': [
'A baby is standing in front of a house.',
'A little girl in a white jacket and sandals.',
'A young child stands in front of a house.',
'A child is wearing a white shirt and standing on a side walk. ',
'A little boy is standing in his diaper with a white shirt on.',
'A child wearing a diaper and shoes stands on the sidewalk.',
'A child is wearing a light-colored shirt during the daytime.',
'A little kid standing on the pavement in a shirt. ',
'Black and white photo of a little girl smiling.',
'a cute baby is standing alone with white shirt'
]
}
```
### Data Fields
- `image`: The image
- `image_coco_url`: URL for the image
- `image_date_captured`: Date at which the image was captured
- `image_file_name`: The file name for the image
- `image_height`: Height of the image
- `image_width`: Width of the image
- `image_id`: Id of the image
- `image_license`: Not sure what this is; it is always 0
- `image_open_images_id`: Open image id
- `annotations_ids`: Unique ids for the captions (to use in conjunction with `annotations_captions`)
- `annotations_captions`: Captions for the image (to use in conjunction with `annotations_ids`)
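A minimal loading sketch pairing each caption id with its caption; it simply takes whichever split the repository exposes:
```python
from datasets import load_dataset

dataset = load_dataset("HuggingFaceM4/NoCaps")
split = next(iter(dataset.values()))     # take whichever split is exposed first
example = split[0]

for ann_id, caption in zip(example["annotations_ids"], example["annotations_captions"]):
    print(ann_id, caption)

print(example["image"].size)             # PIL image, e.g. (732, 1024)
```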
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. | HuggingFaceM4/NoCaps | [
"license:cc-by-2.0",
"region:us"
]
| 2022-12-08T17:11:21+00:00 | {"license": "cc-by-2.0"} | 2022-12-14T04:08:38+00:00 |
305b664c82532d6b8c8d4d57d5896299ae6df0a1 | nev/minedojo-clips | [
"license:isc",
"region:us"
]
| 2022-12-08T18:12:15+00:00 | {"license": "isc"} | 2022-12-08T18:14:58+00:00 |
|
eac3151081115101714032dbb9ef4d577b2d84b2 | # AutoTrain Dataset for project: test_row2
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test_row2.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<316x316 RGB PIL image>",
"target": 1
},
{
"image": "<316x316 RGB PIL image>",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=5, names=['animals', 'dance', 'food', 'sport', 'tech'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 392 |
| valid | 101 |
| Efimov6886/autotrain-data-test_row2 | [
"task_categories:image-classification",
"region:us"
]
| 2022-12-08T18:38:32+00:00 | {"task_categories": ["image-classification"]} | 2022-12-08T18:39:45+00:00 |
09be8c5bbc57cb3887f3a9732ad6aa7ec602a1fa |
# Dataset Card for HH-RLHF
## Dataset Summary
This repository provides access to two different kinds of data:
1. Human preference data about helpfulness and harmlessness from [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). These data are meant to train preference (or reward) models for subsequent RLHF training. These data are *not* meant for supervised training of dialogue agents. Training dialogue agents on these data is likely to lead to harmful models and this should be avoided.
2. Human-generated and annotated red teaming dialogues from [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). These data are meant to understand how crowdworkers red team models and what types of red team attacks are successful or not. The data are *not* meant for fine-tuning or preference modeling (use the data above for preference modeling). These data are entire transcripts of conversations that are derived from the harmlessness preference modeling data described above, where only the chosen response is incorporated into the overall transcript. Furthermore, the transcripts are annotated with human and automated measurements of how harmful the overall dialogues are.
**Disclaimer**: The data (especially the harmlessness preference data and the red team data) contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models *less* harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. As mentioned above, these data are *not* intended for training dialogue agents as this will likely lead to harmful model behavior.
Each of these datasets are described further below.
## Human preference data about helpfulness and harmlessness (PM Data)
The data are described in the paper: [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862). If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected".
For **helpfulness**, the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process.
For **harmlessness**, the data are only collected for our base models, but otherwise formatted in the same way.
Details about the data collection process and crowdworker population can be found in the paper, specifically in section 2 and appendix D.
## Red teaming data (not PM Data)
The data are described in the paper: [Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned](https://www.anthropic.com/red_teaming.pdf). If you find the data useful, please cite the paper. At a high level these data can be examined to understand what successful red team attempts look like.
Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper.
Each line of the jsonl file contains a dictionary with the following fields:
- `transcript` a text transcript of a conversation between a human adversary (red team member) and an AI assistant
- `min_harmlessness_score_transcript` a real value score of the harmlessness of the AI assistant (lower is more harmful) as obtained from a preference model
- `num_params` number of parameters in the language model powering the AI assistant
- `model_type` type of model powering the AI assistant
- `rating` the red team member's rating of how successful they were at breaking the AI assistant (Likert scale, higher is more successful)
- `task_description` a short text description written by the red team member about how they tried to red team the AI assistant
- `task_description_harmlessness_score` a real value score of the harmlessness of the task description (lower is more harmful) as obtained from a preference model
- `red_team_member_id` an arbitrary identifier of the red team member. one red team member can generate multiple red team attacks
- `is_upworker` a binary indicator that is true if the red team member was from the crowd platform Upwork or false if they were from MTurk
- `tags` a list of up to 6 tags per transcript. tags are short descriptions of the red team attempts generated by crowdworkers who reviewed red team data post-hoc. tags were only provided for a random sample of 1000 red team attempts for two of four model types.
## Usage
Each of the above datasets is located in a separate sub-directory. To load an individual subset, use the `data_dir` argument of the `load_dataset()` function as follows:
```python
from datasets import load_dataset
# Load all helpfulness/harmless subsets (share the same schema)
dataset = load_dataset("Anthropic/hh-rlhf")
# Load one of the harmless subsets
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")
# Load the red teaming subset
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="red-team-attempts")
```
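Building on the loading calls above, a minimal sketch of iterating the preference pairs; field names `chosen` and `rejected` follow the description earlier, and the pairwise loss in the comment is the standard formulation rather than code from this repository:
```python
from datasets import load_dataset

pm_data = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base", split="train")

for record in pm_data.select(range(3)):
    chosen, rejected = record["chosen"], record["rejected"]
    # A preference (reward) model r(.) is trained to score `chosen` above `rejected`,
    # e.g. with the pairwise loss -log(sigmoid(r(chosen) - r(rejected))).
    print(chosen[:120].replace("\n", " "), "...")
```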
## Contact
The original authors host this dataset on GitHub here: https://github.com/anthropics/hh-rlhf
You can submit inquiries to: [email protected] | Anthropic/hh-rlhf | [
"license:mit",
"human-feedback",
"arxiv:2204.05862",
"region:us"
]
| 2022-12-08T20:11:33+00:00 | {"license": "mit", "tags": ["human-feedback"]} | 2023-05-26T17:47:34+00:00 |
12cef1374a774d20ecbd85ed243cebda22699995 | awacke1/AIAssessmentPanelsAndForms | [
"license:mit",
"region:us"
]
| 2022-12-08T20:12:37+00:00 | {"license": "mit"} | 2022-12-08T20:13:46+00:00 |
|
5afed41d76630d835d06ba58a290693027b2e604 | tilos/cantonese_daily | [
"license:cc-by-nc-nd-4.0",
"region:us"
]
| 2022-12-08T21:12:33+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2022-12-08T22:36:23+00:00 |
|
5f0b123d95c1f7ec984f37c735cfbe6fb3b3ed32 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774734 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-ilpost", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:37:23+00:00 |
f97781f2633909095e80f70c3915863503cfb29c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-mlsum
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774735 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-mlsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:37:29+00:00 |
2237fe0b5a22dc59cc2fab497c2e23c5974d6f90 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: IlyaGusev/rut5_base_headline_gen_telegram
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774736 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "IlyaGusev/rut5_base_headline_gen_telegram", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:39:06+00:00 |
3fc3fbb59d389efdd0cdba1d857723827bb714ee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: NYTK/summarization-hi-bart-base-1024-hungarian
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774737 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "NYTK/summarization-hi-bart-base-1024-hungarian", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:37:11+00:00 |
3a983a44766c2e8f54fad6e0d26ef56dcc1a2145 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: RUCAIBox/mtl-summarization
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774738 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "RUCAIBox/mtl-summarization", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:38:04+00:00 |
eb75c19b4ad982e530ec9d5ab748c22ff723a9a0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ahmeddbahaa/xlmroberta-finetune-en-cnn
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774739 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "ahmeddbahaa/xlmroberta-finetune-en-cnn", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:38:43+00:00 |
709728f1e88a0b4ed44744d7a1713738d69de8a9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: YtBig/subtitle-v2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-417ba9-2386774740 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-08T21:36:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "YtBig/subtitle-v2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-08T21:38:22+00:00 |
97c5aa6f8d88245a473868f8d9b7740fd88b1e6c | # Dataset Card for "dataset-dalle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/jobs-dalle-2 | [
"region:us"
]
| 2022-12-08T21:42:37+00:00 | {"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 24806949244.5, "num_examples": 31500}], "download_size": 18481427451, "dataset_size": 24806949244.5}} | 2022-12-09T07:49:38+00:00 |
d5517d7703c33f70e9e621e571a50899e5c0d0fe |
This dataset was created from Mozilla's Common Voice dataset for the purposes of Multilingual ASR on Kinyarwanda and English.
The dataset contains 3000 hours of multilingual training samples, 300 hours of validation samples and 200 hours of testing samples.
| molamin/Kinyarwanda_Engligh_Multilingual_ASR | [
"size_categories:700K<n<800K",
"size_categories:~3120 hours",
"language:rw",
"language:en",
"license:cc-by-4.0",
"region:us"
]
| 2022-12-08T21:59:16+00:00 | {"language": ["rw", "en"], "license": ["cc-by-4.0"], "size_categories": ["700K<n<800K", "~3120 hours"]} | 2022-12-09T07:39:29+00:00 |
36a0ef1274ca64b80ab38b59f93608037b4c6390 | # Dataset Card for "cantonese_processed_daily"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tilos/cantonese_processed_daily | [
"region:us"
]
| 2022-12-08T22:40:55+00:00 | {"dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": "float32"}}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 3899650216, "num_examples": 4060}], "download_size": 623179139, "dataset_size": 3899650216}} | 2022-12-08T22:41:45+00:00 |
480186ac8cde3997a74b6395590420b7ec4ee267 | # Dataset Card for FairFace
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/joojs/fairface](https://github.com/joojs/fairface)
- **Repository:** [https://github.com/joojs/fairface](https://github.com/joojs/fairface)
- **Paper:** [https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
FairFace is a race-balanced face image dataset. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino.
Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance has the following structure:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=448x448 at 0x7FCABA221FA0>,
'age': 6,
'gender': 0,
'race': 0,
'service_test': True
}
```
### Data Fields
- `image`: The image
- `age`: Age class among `["0-2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "more than 70"]`
- `gender`: Gender class among `["Male", "Female"]`
- `race`: Race class among `["East Asian", "Indian", "Black", "White", "Middle Eastern", "Latino_Hispanic", "Southeast Asian"]`
- `service_test`: Boolean flag whose purpose is not documented; see this [issue](https://github.com/joojs/fairface/issues/9) for discussion.
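Below is a minimal, illustrative sketch of loading the dataset and decoding the integer labels using the class lists above. The `train` split name and the absence of a config argument are assumptions; adjust them to match the repository's actual configuration.

```python
from datasets import load_dataset

# Assumption: a "train" split exists and no config name is required.
ds = load_dataset("HuggingFaceM4/FairFace", split="train")

# Class lists copied from the field descriptions above.
AGE_CLASSES = ["0-2", "3-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "more than 70"]
GENDER_CLASSES = ["Male", "Female"]
RACE_CLASSES = ["East Asian", "Indian", "Black", "White", "Middle Eastern", "Latino_Hispanic", "Southeast Asian"]

example = ds[0]
print(
    AGE_CLASSES[example["age"]],        # e.g. "50-59" for age == 6
    GENDER_CLASSES[example["gender"]],  # e.g. "Male" for gender == 0
    RACE_CLASSES[example["race"]],      # e.g. "East Asian" for race == 0
)
```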
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
| HuggingFaceM4/FairFace | [
"license:cc-by-4.0",
"region:us"
]
| 2022-12-08T23:00:45+00:00 | {"license": "cc-by-4.0"} | 2022-12-09T00:14:46+00:00 |
d2a0cc5509d56204238d34969b0b3657f8d3bf32 | Checks with https://visualqa.org/download.html:
- Num train questions: 443,757
- Num val questions: 214,354
- Num test questions: 447,793
- Num train answers: 4,437,570
- Num val answers: 2,143,540
- Num train images: 82,783
- Num val images: 40,504
- Num test images: 81,434
testdev is not mentioned:
- Num questions: 107,394
- Num images: 36,807 | HuggingFaceM4/clevr | [
"region:us"
]
| 2022-12-09T00:26:22+00:00 | {} | 2022-12-10T19:56:30+00:00 |
f4872d73a965adbb833f4330e8c20ea5a3588d24 | # Dataset Card for "tattoo_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Drozdik/tattoo_v1 | [
"region:us"
]
| 2022-12-09T00:36:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101532798.169, "num_examples": 4239}], "download_size": 78733652, "dataset_size": 101532798.169}} | 2022-12-09T00:39:03+00:00 |
97c32c4fc28136f82b442d9b444c479478741fb3 | # Dataset Card for "tattoo_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Drozdik/tattoo_v3 | [
"region:us"
]
| 2022-12-09T00:57:27+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101626056.169, "num_examples": 4239}], "download_size": 78738858, "dataset_size": 101626056.169}} | 2022-12-09T01:00:20+00:00 |
70cdb6a63172ab8d0510eec4812fe5fe77b81c2a | zmao/food_img_caption | [
"license:other",
"region:us"
]
| 2022-12-09T01:12:12+00:00 | {"license": "other", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 10312.0, "num_examples": 1}], "download_size": 11215, "dataset_size": 10312.0}} | 2022-12-09T14:10:48+00:00 |
|
dbfbc58b8cc454e82ed36fc4c7a0b0140e0f5aae | # Dataset Card for "news_corpus_v2_p1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hieule/news_corpus_v2_p1 | [
"region:us"
]
| 2022-12-09T03:47:17+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "sapo", "dtype": "string"}, {"name": "cates", "sequence": "string"}, {"name": "publish", "dtype": "timestamp[us]"}, {"name": "text_content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15876374992, "num_examples": 5000000}], "download_size": 7858134654, "dataset_size": 15876374992}} | 2022-12-09T05:54:12+00:00 |
5c7efb37c428b1486cfd6ffaa204f32cd73200dd | akpotluri/NLPFinalProject | [
"license:openrail",
"region:us"
]
| 2022-12-09T04:18:48+00:00 | {"license": "openrail"} | 2022-12-09T04:18:48+00:00 |
|
be6b2afd76cc5196705f419b0cb91a2b583c3155 | # Dataset Card for TwitterDEILA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `ID`
- `Sentiment`
- `Content`
### Data Splits
- train
- validate
- validatedeila
- validateonetest
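A hypothetical loading sketch using the splits listed above; whether the repository exposes them under exactly these names is an assumption.

```python
from datasets import load_dataset

# Repository id taken from this card; split names are assumptions from the list above.
twitter = load_dataset("deancgarcia/TwitterDEILA")

for split_name in ["train", "validate", "validatedeila", "validateonetest"]:
    if split_name in twitter:
        # Expected columns per the Data Fields section: ID, Sentiment, Content.
        print(split_name, twitter[split_name].column_names)
```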
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | deancgarcia/TwitterDEILA | [
"region:us"
]
| 2022-12-09T04:48:33+00:00 | {} | 2022-12-09T23:43:06+00:00 |
4da5bdd1570c7c74fb2f570a29bd515222f9d75d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: stacked-summaries/flan-t5-large-stacked-samsum-1024
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-eval-samsum-samsum-29813b-2390574811 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-09T04:50:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "stacked-summaries/flan-t5-large-stacked-samsum-1024", "metrics": ["bertscore"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-12-09T04:52:51+00:00 |