Columns: sha (string, length 40), text (string, length 0 to 13.4M), id (string, length 2 to 117), tags (list), created_at (string, length 25), metadata (string, length 2 to 31.7M), last_modified (string, length 25)
90fd10a63414a89bbeda54d06c12f81662ee21b7
# Dataset Card for `tripclick/train/head` The `tripclick/train/head` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/head). # Data This dataset provides: - `queries` (i.e., topics); count=3,529 - `qrels`: (relevance assessments); count=116,821 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) This dataset is used by: [`tripclick_train_head_dctr`](https://huggingface.co/datasets/irds/tripclick_train_head_dctr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_head', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_head', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_head
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:18+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/head`", "viewer": false}
2023-01-05T03:54:24+00:00
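A short sketch of how the `queries` and `qrels` configurations from the card above might be combined into an evaluation-ready structure. It assumes the loaded configurations iterate record-by-record exactly as in the card's usage example; the `build_judgments` helper name is ours, not part of the package.
```python
from collections import defaultdict

from datasets import load_dataset


def build_judgments(dataset_name):
    """Map each query_id to its text and its judged (doc_id, relevance) pairs."""
    queries = load_dataset(dataset_name, 'queries')
    qrels = load_dataset(dataset_name, 'qrels')

    judgments = defaultdict(lambda: {'text': None, 'judged': []})
    for record in queries:
        judgments[record['query_id']]['text'] = record['text']
    for record in qrels:
        judgments[record['query_id']]['judged'].append(
            (record['doc_id'], record['relevance']))
    return judgments


head = build_judgments('irds/tripclick_train_head')  # ~3.5k queries, ~117k qrels
```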
617dd3e00e3d470a1cf5b039115007ff6fa32efe
# Dataset Card for `tripclick/train/head/dctr` The `tripclick/train/head/dctr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/head/dctr). # Data This dataset provides: - `qrels`: (relevance assessments); count=128,420 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) - For `queries`, use [`irds/tripclick_train_head`](https://huggingface.co/datasets/irds/tripclick_train_head) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/tripclick_train_head_dctr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_head_dctr
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "source_datasets:irds/tripclick_train_head", "region:us" ]
2023-01-05T03:54:29+00:00
{"source_datasets": ["irds/tripclick", "irds/tripclick_train_head"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/head/dctr`", "viewer": false}
2023-01-05T03:54:35+00:00
4a060987d00ec62f161e3dd58642b8284568816b
# Dataset Card for `tripclick/train/hofstaetter-triples` The `tripclick/train/hofstaetter-triples` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/hofstaetter-triples). # Data This dataset provides: - `docpairs`; count=10,000,000 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) - For `queries`, use [`irds/tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train) - For `qrels`, use [`irds/tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train) ## Usage ```python from datasets import load_dataset docpairs = load_dataset('irds/tripclick_train_hofstaetter-triples', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } @inproceedings{Hofstaetter2022TripClick, title={Establishing Strong Baselines for TripClick Health Retrieval}, author={Sebastian Hofst\"atter and Sophia Althammer and Mete Sertkan and Allan Hanbury}, year={2022}, booktitle={ECIR} } ```
irds/tripclick_train_hofstaetter-triples
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "source_datasets:irds/tripclick_train", "region:us" ]
2023-01-05T03:54:40+00:00
{"source_datasets": ["irds/tripclick", "irds/tripclick_train"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/hofstaetter-triples`", "viewer": false}
2023-01-05T03:54:46+00:00
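A sketch of turning the `docpairs` above into text triples for training, joining against the document and query datasets the card points to. The fully in-memory lookup tables are illustrative only (a real pipeline would likely use an on-disk store), and treating `doc_id_a` as the positive and `doc_id_b` as the negative follows the usual docpairs convention rather than anything stated on the card.
```python
from datasets import load_dataset

# Illustrative only: holding the full TripClick corpus in memory may be impractical.
doc_text = {d['doc_id']: d['text'] for d in load_dataset('irds/tripclick', 'docs')}
query_text = {q['query_id']: q['text'] for q in load_dataset('irds/tripclick_train', 'queries')}

docpairs = load_dataset('irds/tripclick_train_hofstaetter-triples', 'docpairs')
for pair in docpairs:
    triple = (
        query_text[pair['query_id']],
        doc_text[pair['doc_id_a']],  # assumed positive document
        doc_text[pair['doc_id_b']],  # assumed negative document
    )
    # feed `triple` to a pairwise/triplet training loop
```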
8222d24a4b4a6a5d74ae00d4c9f4d8a58b4f5c91
# Dataset Card for `tripclick/train/tail` The `tripclick/train/tail` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/tail). # Data This dataset provides: - `queries` (i.e., topics); count=576,156 - `qrels`: (relevance assessments); count=1,621,493 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_tail', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_tail', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_tail
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:52+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/tail`", "viewer": false}
2023-01-05T03:54:57+00:00
a931b674056470cf3a953e42f384964c22463485
# Dataset Card for `tripclick/train/torso` The `tripclick/train/torso` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train/torso). # Data This dataset provides: - `queries` (i.e., topics); count=105,964 - `qrels`: (relevance assessments); count=966,898 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tripclick_train_torso', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/tripclick_train_torso', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_train_torso
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:55:03+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train/torso`", "viewer": false}
2023-01-05T03:55:09+00:00
961409b313ecb3ddcb1ea66c346b856a311f69f0
# Dataset Card for `tripclick/val/head/dctr` The `tripclick/val/head/dctr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/val/head/dctr). # Data This dataset provides: - `qrels`: (relevance assessments); count=66,812 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/tripclick_val_head_dctr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Rekabsaz2021TripClick, title={TripClick: The Log Files of a Large Health Web Search Engine}, author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff}, year={2021}, booktitle={SIGIR} } ```
irds/tripclick_val_head_dctr
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:55:14+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/val/head/dctr`", "viewer": false}
2023-01-05T03:55:20+00:00
d6143877d65dbca6cc33910e64833b75d3595239
# Dataset Card for `tweets2013-ia` The `tweets2013-ia` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=252,713,133 This dataset is used by: [`tweets2013-ia_trec-mb-2013`](https://huggingface.co/datasets/irds/tweets2013-ia_trec-mb-2013), [`tweets2013-ia_trec-mb-2014`](https://huggingface.co/datasets/irds/tweets2013-ia_trec-mb-2014) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/tweets2013-ia', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'user_id': ..., 'created_at': ..., 'lang': ..., 'reply_doc_id': ..., 'retweet_doc_id': ..., 'source': ..., 'source_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:55:25+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia`", "viewer": false}
2023-01-05T03:55:31+00:00
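With roughly 253 million tweets, materializing the `docs` configuration locally is a substantial download. If the Hub mirror supports it, streaming is one way to peek at the data without copying it first; whether `streaming=True` works for this loader is an assumption, and the loop otherwise follows the card's own iteration pattern.
```python
from itertools import islice

from datasets import load_dataset

# Assumption: this loader supports streaming; if not, the full download applies.
docs = load_dataset('irds/tweets2013-ia', 'docs', streaming=True)
for record in islice(docs, 5):
    print(record['doc_id'], record['lang'], record['text'][:80])
```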
52fae0a6d6a507bbd3bf208c6383664a03e94f11
# Dataset Card for `tweets2013-ia/trec-mb-2013` The `tweets2013-ia/trec-mb-2013` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia/trec-mb-2013). # Data This dataset provides: - `queries` (i.e., topics); count=60 - `qrels`: (relevance assessments); count=71,279 - For `docs`, use [`irds/tweets2013-ia`](https://huggingface.co/datasets/irds/tweets2013-ia) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tweets2013-ia_trec-mb-2013', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'time': ..., 'tweet_time': ...} qrels = load_dataset('irds/tweets2013-ia_trec-mb-2013', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Lin2013Microblog, title={Overview of the TREC-2013 Microblog Track}, author={Jimmy Lin and Miles Efron}, booktitle={TREC}, year={2013} } @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia_trec-mb-2013
[ "task_categories:text-retrieval", "source_datasets:irds/tweets2013-ia", "region:us" ]
2023-01-05T03:55:36+00:00
{"source_datasets": ["irds/tweets2013-ia"], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia/trec-mb-2013`", "viewer": false}
2023-01-05T03:55:42+00:00
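The qrels above are deep (over 71k judgments for 60 topics), so a quick per-topic summary can be useful before running an evaluation. A sketch using pandas, again assuming the records iterate as in the card's usage example.
```python
import pandas as pd
from datasets import load_dataset

qrels = load_dataset('irds/tweets2013-ia_trec-mb-2013', 'qrels')
df = pd.DataFrame(list(qrels))

# Judged and relevant (relevance > 0) tweets per topic.
summary = df.groupby('query_id')['relevance'].agg(
    judged='count',
    relevant=lambda r: (r > 0).sum(),
)
print(summary.describe())
```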
a719630ac5bc91884254e45ef2be22a470c926ca
# Dataset Card for `tweets2013-ia/trec-mb-2014` The `tweets2013-ia/trec-mb-2014` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/tweets2013-ia#tweets2013-ia/trec-mb-2014). # Data This dataset provides: - `queries` (i.e., topics); count=55 - `qrels`: (relevance assessments); count=57,985 - For `docs`, use [`irds/tweets2013-ia`](https://huggingface.co/datasets/irds/tweets2013-ia) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/tweets2013-ia_trec-mb-2014', 'queries') for record in queries: record # {'query_id': ..., 'query': ..., 'time': ..., 'tweet_time': ..., 'description': ...} qrels = load_dataset('irds/tweets2013-ia_trec-mb-2014', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Lin2014Microblog, title={Overview of the TREC-2014 Microblog Track}, author={Jimmy Lin and Miles Efron and Yulu Wang and Garrick Sherman}, booktitle={TREC}, year={2014} } @inproceedings{Sequiera2017TweetsIA, title={Finally, a Downloadable Test Collection of Tweets}, author={Royal Sequiera and Jimmy Lin}, booktitle={SIGIR}, year={2017} } ```
irds/tweets2013-ia_trec-mb-2014
[ "task_categories:text-retrieval", "source_datasets:irds/tweets2013-ia", "region:us" ]
2023-01-05T03:55:47+00:00
{"source_datasets": ["irds/tweets2013-ia"], "task_categories": ["text-retrieval"], "pretty_name": "`tweets2013-ia/trec-mb-2014`", "viewer": false}
2023-01-05T03:55:53+00:00
a5017ebcec57535ec8b4750eb0360183e3f7edc4
# Dataset Card for `vaswani` The `vaswani` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/vaswani#vaswani). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=11,429 - `queries` (i.e., topics); count=93 - `qrels`: (relevance assessments); count=2,083 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/vaswani', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/vaswani', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/vaswani', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/vaswani
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:55:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`vaswani`", "viewer": false}
2023-01-05T03:56:04+00:00
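Because `vaswani` is tiny, it is a convenient smoke test for an evaluation pipeline. A sketch that writes the qrels out in the classic four-column TREC format consumed by tools such as `trec_eval`; the output filename is arbitrary.
```python
from datasets import load_dataset

qrels = load_dataset('irds/vaswani', 'qrels')
with open('vaswani.qrels', 'w') as f:
    for record in qrels:
        # TREC qrels format: query_id iteration doc_id relevance
        f.write(f"{record['query_id']} {record['iteration']} "
                f"{record['doc_id']} {record['relevance']}\n")
```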
c432fd8721459a09be9a0f8c30a275801dbd8ce6
# Dataset Card for `wapo/v2/trec-core-2018` The `wapo/v2/trec-core-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-core-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=26,233 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-core-2018', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/wapo_v2_trec-core-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/wapo_v2_trec-core-2018
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-core-2018`", "viewer": false}
2023-01-05T03:56:15+00:00
9ab6811f8d738ac30f4befaf1249297be2cbf4a6
# Dataset Card for `wapo/v2/trec-news-2018` The `wapo/v2/trec-news-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=8,508 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-news-2018', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v2_trec-news-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Soboroff2018News, title={TREC 2018 News Track Overview}, author={Ian Soboroff and Shudong Huang and Donna Harman}, booktitle={TREC}, year={2018} } ```
irds/wapo_v2_trec-news-2018
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-news-2018`", "viewer": false}
2023-01-05T03:56:26+00:00
247513df75b08e9d1918dc59823f26b8d3365e6e
# Dataset Card for `wapo/v2/trec-news-2019` The `wapo/v2/trec-news-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2019). # Data This dataset provides: - `queries` (i.e., topics); count=60 - `qrels`: (relevance assessments); count=15,655 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v2_trec-news-2019', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v2_trec-news-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Soboroff2019News, title={TREC 2019 News Track Overview}, author={Ian Soboroff and Shudong Huang and Donna Harman}, booktitle={TREC}, year={2019} } ```
irds/wapo_v2_trec-news-2019
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v2/trec-news-2019`", "viewer": false}
2023-01-05T03:56:38+00:00
4a56eae66bba29137d43811583a2a9fea9be4b80
# Dataset Card for `wapo/v3/trec-news-2020` The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=17,764 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries') for record in queries: record # {'query_id': ..., 'doc_id': ..., 'url': ...} qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/wapo_v3_trec-news-2020
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wapo/v3/trec-news-2020`", "viewer": false}
2023-01-05T03:56:49+00:00
2e5d9727052ef3595077faa290e3134bd63d105f
# Dataset Card for `wikiclir/ar` The `wikiclir/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ar). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=535,118 - `queries` (i.e., topics); count=324,489 - `qrels`: (relevance assessments); count=519,269 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ar', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ar', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ar', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ar
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:56:54+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ar`", "viewer": false}
2023-01-05T03:57:00+00:00
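The WikiCLIR cards that follow differ only in their language code, so a small helper can load any of them uniformly. The list of codes mirrors the subsets that appear in this section; `load_wikiclir` is our name, not part of the package.
```python
from datasets import load_dataset

WIKICLIR_LANGS = ['ar', 'ca', 'cs', 'de', 'en-simple', 'es', 'fi', 'fr', 'it',
                  'ja', 'ko', 'nl', 'nn', 'no', 'pl', 'pt', 'ro', 'ru', 'sv',
                  'sw', 'tl']


def load_wikiclir(lang, config):
    """Load one WikiCLIR subset ('docs', 'queries', or 'qrels') by language code."""
    if lang not in WIKICLIR_LANGS:
        raise ValueError(f'unknown WikiCLIR language: {lang}')
    return load_dataset(f'irds/wikiclir_{lang}', config)


swahili_queries = load_wikiclir('sw', 'queries')  # ~23k queries
```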
a4092e5d19aaef68b4980fa9f2b97b9878c2960c
# Dataset Card for `wikiclir/ca` The `wikiclir/ca` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ca). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=548,722 - `queries` (i.e., topics); count=339,586 - `qrels`: (relevance assessments); count=965,233 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ca', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ca', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ca', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ca
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:05+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ca`", "viewer": false}
2023-01-05T03:57:11+00:00
ac55b3ea8a426c0adfb9bae3681814b434a38cb0
# Dataset Card for `wikiclir/cs` The `wikiclir/cs` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/cs). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=386,906 - `queries` (i.e., topics); count=233,553 - `qrels`: (relevance assessments); count=954,370 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_cs', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_cs', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_cs', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_cs
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/cs`", "viewer": false}
2023-01-05T03:57:22+00:00
524d177f6651189d1ece8f4afcd1d00726b583cb
# Dataset Card for `wikiclir/de` The `wikiclir/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/de). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,091,278 - `queries` (i.e., topics); count=938,217 - `qrels`: (relevance assessments); count=5,550,454 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_de', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_de', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_de', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_de
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/de`", "viewer": false}
2023-01-05T03:57:33+00:00
14d4021f04cb9800c434724403fb1a33b4f14f15
# Dataset Card for `wikiclir/en-simple` The `wikiclir/en-simple` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/en-simple). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=127,089 - `queries` (i.e., topics); count=114,572 - `qrels`: (relevance assessments); count=250,380 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_en-simple', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_en-simple', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_en-simple', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_en-simple
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/en-simple`", "viewer": false}
2023-01-05T03:57:44+00:00
79e3fb27a5b5996e616373917494eacfd1dd0ddf
# Dataset Card for `wikiclir/es` The `wikiclir/es` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/es). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,302,958 - `queries` (i.e., topics); count=781,642 - `qrels`: (relevance assessments); count=2,894,807 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_es', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_es', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_es', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_es
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:57:50+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/es`", "viewer": false}
2023-01-05T03:57:55+00:00
2fbaf54cd79254c4642cdba2a323ea8387e37114
# Dataset Card for `wikiclir/fi` The `wikiclir/fi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fi). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=418,677 - `queries` (i.e., topics); count=273,819 - `qrels`: (relevance assessments); count=939,613 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_fi', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_fi', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_fi', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_fi
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:01+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/fi`", "viewer": false}
2023-01-05T03:58:07+00:00
77d187f804572db733df8e9e9618f378e6a25391
# Dataset Card for `wikiclir/fr` The `wikiclir/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,894,397 - `queries` (i.e., topics); count=1,089,179 - `qrels`: (relevance assessments); count=5,137,366 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_fr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_fr', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_fr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_fr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:12+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/fr`", "viewer": false}
2023-01-05T03:58:18+00:00
9f5309c8bc74eee2860535108f3d5da8a7d5ba56
# Dataset Card for `wikiclir/it` The `wikiclir/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/it). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,347,011 - `queries` (i.e., topics); count=808,605 - `qrels`: (relevance assessments); count=3,443,633 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_it', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_it', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_it', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_it
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:23+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/it`", "viewer": false}
2023-01-05T03:58:29+00:00
72cc55ced1a6c79bd848e6d99959983d47b61f4e
# Dataset Card for `wikiclir/ja` The `wikiclir/ja` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ja). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,071,292 - `queries` (i.e., topics); count=426,431 - `qrels`: (relevance assessments); count=3,338,667 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ja', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ja', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ja', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ja
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:34+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ja`", "viewer": false}
2023-01-05T03:58:40+00:00
9304730a276f792ee322e52c4b9c9e8bb92ec4f7
# Dataset Card for `wikiclir/ko` The `wikiclir/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ko). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=394,177 - `queries` (i.e., topics); count=224,855 - `qrels`: (relevance assessments); count=568,205 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ko', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ko', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ko', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ko
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:45+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ko`", "viewer": false}
2023-01-05T03:58:51+00:00
98a3e584c1719f9af231bfdeb824f939af233ca0
# Dataset Card for `wikiclir/nl` The `wikiclir/nl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,908,260 - `queries` (i.e., topics); count=687,718 - `qrels`: (relevance assessments); count=2,334,644 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_nl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_nl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_nl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_nl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:58:57+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/nl`", "viewer": false}
2023-01-05T03:59:02+00:00
07e5a78fa08080ff5f25f9da12ee3de92016732d
# Dataset Card for `wikiclir/nn` The `wikiclir/nn` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nn). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=133,290 - `queries` (i.e., topics); count=99,493 - `qrels`: (relevance assessments); count=250,141 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_nn', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_nn', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_nn', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_nn
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:08+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/nn`", "viewer": false}
2023-01-05T03:59:13+00:00
2a4a0225105760f3c2a5fb1bf0f74d5304144439
# Dataset Card for `wikiclir/no` The `wikiclir/no` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/no). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=471,420 - `queries` (i.e., topics); count=299,897 - `qrels`: (relevance assessments); count=963,514 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_no', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_no', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_no', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_no
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:19+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/no`", "viewer": false}
2023-01-05T03:59:24+00:00
a60c4b3201953e6f73404da4f8e257a35dbb3e51
# Dataset Card for `wikiclir/pl` The `wikiclir/pl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,234,316 - `queries` (i.e., topics); count=693,656 - `qrels`: (relevance assessments); count=2,471,360 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_pl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_pl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_pl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_pl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/pl`", "viewer": false}
2023-01-05T03:59:35+00:00
b7a9c26c16d319a170ffaf4278831c91d95a3f80
# Dataset Card for `wikiclir/pt` The `wikiclir/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=973,057 - `queries` (i.e., topics); count=611,732 - `qrels`: (relevance assessments); count=1,741,889 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_pt', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_pt', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_pt', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_pt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:41+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/pt`", "viewer": false}
2023-01-05T03:59:48+00:00
0cd574b3cd70d2c3de60d7cd6b57318cd8d12894
# Dataset Card for `wikiclir/ro` The `wikiclir/ro` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ro). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=376,655 - `queries` (i.e., topics); count=199,264 - `qrels`: (relevance assessments); count=451,180 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ro', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ro', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ro', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ro
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:59:53+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ro`", "viewer": false}
2023-01-05T03:59:59+00:00
794af9f06463ebe99d484d1fddc4851cff4eb143
# Dataset Card for `wikiclir/ru` The `wikiclir/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,413,945 - `queries` (i.e., topics); count=664,924 - `qrels`: (relevance assessments); count=2,321,384 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_ru', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_ru', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:04+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/ru`", "viewer": false}
2023-01-05T04:00:10+00:00
eb942ebdcde3f00e8ab2625f12fea6e763d89c5e
# Dataset Card for `wikiclir/sv` The `wikiclir/sv` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sv). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,785,412 - `queries` (i.e., topics); count=639,073 - `qrels`: (relevance assessments); count=2,069,453 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_sv', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_sv', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_sv', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_sv
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/sv`", "viewer": false}
2023-01-05T04:00:21+00:00
8720841ac50187a4f2cb975a561071aa394adae5
# Dataset Card for `wikiclir/sw` The `wikiclir/sw` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sw). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=37,079 - `queries` (i.e., topics); count=22,860 - `qrels`: (relevance assessments); count=57,924 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_sw', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_sw', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_sw', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_sw
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:27+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/sw`", "viewer": false}
2023-01-05T04:00:32+00:00
4deb906ed0fcae7cbd355e37821a2b8c1aeeed76
# Dataset Card for `wikiclir/tl` The `wikiclir/tl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tl). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=79,008 - `queries` (i.e., topics); count=48,930 - `qrels`: (relevance assessments); count=72,359 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_tl', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_tl', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_tl', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_tl
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:38+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/tl`", "viewer": false}
2023-01-05T04:00:44+00:00
7aaa433bf0a1b2a46fbf49d62bcac06db34acd3b
# Dataset Card for `wikiclir/tr` The `wikiclir/tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=295,593 - `queries` (i.e., topics); count=185,388 - `qrels`: (relevance assessments); count=380,651 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_tr', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_tr', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_tr', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_tr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:00:49+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/tr`", "viewer": false}
2023-01-05T04:00:55+00:00
34241822fbdee548da53215dc7e9aa079fedfda6
# Dataset Card for `wikiclir/uk` The `wikiclir/uk` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/uk). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=704,903 - `queries` (i.e., topics); count=348,222 - `qrels`: (relevance assessments); count=913,358 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_uk', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_uk', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_uk', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_uk
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:00+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/uk`", "viewer": false}
2023-01-05T04:01:06+00:00
11c4efc0c905cf0f7e1ec205a0d106063117d401
# Dataset Card for `wikiclir/vi` The `wikiclir/vi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/vi). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,392,152 - `queries` (i.e., topics); count=354,312 - `qrels`: (relevance assessments); count=611,355 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_vi', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_vi', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_vi', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_vi
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:11+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/vi`", "viewer": false}
2023-01-05T04:01:17+00:00
da21f40f6e4b7c793a5a432c31c2cf4d7f1352ff
# Dataset Card for `wikiclir/zh` The `wikiclir/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=951,480 - `queries` (i.e., topics); count=463,273 - `qrels`: (relevance assessments); count=926,130 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikiclir_zh', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ...} queries = load_dataset('irds/wikiclir_zh', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/wikiclir_zh', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{sasaki-etal-2018-cross, title = "Cross-Lingual Learning-to-Rank with Shared Representations", author = "Sasaki, Shota and Sun, Shuo and Schamoni, Shigehiko and Duh, Kevin and Inui, Kentaro", booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)", month = jun, year = "2018", address = "New Orleans, Louisiana", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N18-2073", doi = "10.18653/v1/N18-2073", pages = "458--463" } ```
irds/wikiclir_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:22+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikiclir/zh`", "viewer": false}
2023-01-05T04:01:28+00:00
34faf269615dfec84c7406e1aad4028184021690
# Dataset Card for `wikir/en1k` The `wikir/en1k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en1k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=369,721 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en1k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
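Since this subset ships only the document collection, a common first step is to index it for retrieval. The sketch below is illustrative only: it uses the third-party `rank_bm25` package, naive whitespace tokenization, and caps the corpus at 10,000 documents to keep the example small; field access follows the usage example above.

```python
from datasets import load_dataset
from rank_bm25 import BM25Okapi  # third-party package, shown purely for illustration

docs = load_dataset('irds/wikir_en1k', 'docs')
doc_ids, corpus = [], []
for i, record in enumerate(docs):      # field access follows the usage example above
    if i >= 10_000:                    # cap the corpus size to keep the sketch cheap
        break
    doc_ids.append(record['doc_id'])
    corpus.append(record['text'].lower().split())  # naive whitespace tokenization

bm25 = BM25Okapi(corpus)
scores = bm25.get_scores('french revolution'.split())
top_doc = doc_ids[int(scores.argmax())]  # doc_id of the best-scoring document
```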
irds/wikir_en1k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:33+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en1k`", "viewer": false}
2023-01-05T04:01:39+00:00
2d9fe6d78bd5fb74187947c42fb47aecb45e616d
# Dataset Card for `wikir/en59k` The `wikir/en59k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en59k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,454,785 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en59k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_en59k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:44+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en59k`", "viewer": false}
2023-01-05T04:01:50+00:00
381392f54da22a76b14335afddb7a77ac3e9acf7
# Dataset Card for `wikir/en78k` The `wikir/en78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en78k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,456,637 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_en78k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_en78k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:01:56+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/en78k`", "viewer": false}
2023-01-05T04:02:01+00:00
4974b363522566225161cd1d4909c90f93f10324
# Dataset Card for `wikir/ens78k` The `wikir/ens78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/ens78k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,456,637 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_ens78k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_ens78k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:07+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/ens78k`", "viewer": false}
2023-01-05T04:02:13+00:00
db04698ad140684d348a6c4c200a105488cb2568
# Dataset Card for `wikir/es13k` The `wikir/es13k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/es13k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=645,901 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_es13k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_es13k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:18+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/es13k`", "viewer": false}
2023-01-05T04:02:24+00:00
4b30e81715f939304b635ab6d0d310c4206388f1
# Dataset Card for `wikir/fr14k` The `wikir/fr14k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/fr14k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=736,616 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_fr14k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_fr14k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:29+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/fr14k`", "viewer": false}
2023-01-05T04:02:35+00:00
b880a1e3756c68f85108b73e93cf1f121e858e5a
# Dataset Card for `wikir/it16k` The `wikir/it16k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/it16k). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=503,012 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/wikir_it16k', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Frej2020Wikir, title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={LREC}, year={2020} } @inproceedings{Frej2020MlWikir, title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More}, author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet}, booktitle={CIRCLE}, year={2020} } ```
irds/wikir_it16k
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:40+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`wikir/it16k`", "viewer": false}
2023-01-05T04:02:46+00:00
53e8015a522198555acd5c856d28dbe9bff6da9c
# Dataset Card for `trec-fair/2022/train` The `trec-fair/2022/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-fair#trec-fair/2022/train). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=2,088,306 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-fair_2022_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'url': ...} qrels = load_dataset('irds/trec-fair_2022_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/trec-fair_2022_train
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:02:51+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-fair/2022/train`", "viewer": false}
2023-01-05T04:02:57+00:00
25bbce2867adeb1406f7b6037dbe97caf78ec7cf
# Dataset Card for `trec-cast/v0` The `trec-cast/v0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v0). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=47,696,605 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-cast_v0', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2019Cast, title={CAsT 2019: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2019} } ```
irds/trec-cast_v0
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:03:03+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v0`", "viewer": false}
2023-01-05T04:03:08+00:00
64ac84bd0275f929e5007e75972bae185bbb7bfe
# Dataset Card for `trec-cast/v1` The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=38,622,444 This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-cast_v1', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2019Cast, title={CAsT 2019: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2019} } ```
irds/trec-cast_v1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:03:14+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1`", "viewer": false}
2023-01-05T04:03:19+00:00
c730a1fc68ec286a83e785b796c2ebb11b6e34d2
# Dataset Card for `trec-cast/v1/2020` The `trec-cast/v1/2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020). # Data This dataset provides: - `queries` (i.e., topics); count=216 - `qrels`: (relevance assessments); count=40,451 - For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1) This dataset is used by: [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-cast_v1_2020', 'queries') for record in queries: record # {'query_id': ..., 'raw_utterance': ..., 'automatic_rewritten_utterance': ..., 'manual_rewritten_utterance': ..., 'manual_canonical_result_id': ..., 'topic_number': ..., 'turn_number': ...} qrels = load_dataset('irds/trec-cast_v1_2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2020Cast, title={CAsT 2020: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2020} } ```
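Because CAsT queries are conversational turns, it is often useful to regroup them by topic before retrieval. The sketch below is a minimal illustration that relies only on the fields listed in the usage example above.

```python
from collections import defaultdict
from datasets import load_dataset

# Minimal sketch: group turns by topic_number and order them by turn_number,
# so that earlier raw utterances can serve as context for later ones.
queries = load_dataset('irds/trec-cast_v1_2020', 'queries')
conversations = defaultdict(list)
for record in queries:  # field access follows the usage example above
    conversations[record['topic_number']].append(record)

for topic, turns in conversations.items():
    turns.sort(key=lambda r: r['turn_number'])
    history = [t['raw_utterance'] for t in turns]
    # `history` now holds one conversation's raw utterances in turn order.
```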
irds/trec-cast_v1_2020
[ "task_categories:text-retrieval", "source_datasets:irds/trec-cast_v1", "region:us" ]
2023-01-05T04:03:25+00:00
{"source_datasets": ["irds/trec-cast_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1/2020`", "viewer": false}
2023-01-05T04:03:31+00:00
a6b28f20219044c62f728352f3692897550a30cf
# Dataset Card for `trec-cast/v1/2020/judged` The `trec-cast/v1/2020/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020/judged). # Data This dataset provides: - `queries` (i.e., topics); count=208 - For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1) - For `qrels`, use [`irds/trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-cast_v1_2020_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dalton2020Cast, title={CAsT 2020: The Conversational Assistance Track Overview}, author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan}, booktitle={TREC}, year={2020} } ```
irds/trec-cast_v1_2020_judged
[ "task_categories:text-retrieval", "source_datasets:irds/trec-cast_v1", "source_datasets:irds/trec-cast_v1_2020", "region:us" ]
2023-01-05T04:03:36+00:00
{"source_datasets": ["irds/trec-cast_v1", "irds/trec-cast_v1_2020"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-cast/v1/2020/judged`", "viewer": false}
2023-01-05T04:03:42+00:00
762f2fe26d008e9908e0d451cbb1c3818c95a89f
# Dataset Card for `hc4/fa` The `hc4/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/fa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=486,486 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/hc4_fa', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Lawrie2022HC4, author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang}, title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR}, booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}, year = {2022}, month = apr, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Stavanger, Norway}, url = {https://arxiv.org/abs/2201.09992} } ```
irds/hc4_fa
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:03:47+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/fa`", "viewer": false}
2023-01-05T04:03:53+00:00
65fa23f6fb44e5965c09369b58c07ceeb3f169c5
# Dataset Card for `hc4/ru` The `hc4/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=4,721,064 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/hc4_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Lawrie2022HC4, author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang}, title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR}, booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}, year = {2022}, month = apr, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Stavanger, Norway}, url = {https://arxiv.org/abs/2201.09992} } ```
irds/hc4_ru
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:03:58+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/ru`", "viewer": false}
2023-01-05T04:04:04+00:00
cc98dd7e796bd3a70cea16b97ae0df671acbf444
# Dataset Card for `hc4/zh` The `hc4/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=646,305 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/hc4_zh', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Lawrie2022HC4, author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang}, title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR}, booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}, year = {2022}, month = apr, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Stavanger, Norway}, url = {https://arxiv.org/abs/2201.09992} } ```
irds/hc4_zh
[ "task_categories:text-retrieval", "arxiv:2201.09992", "region:us" ]
2023-01-05T04:04:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`hc4/zh`", "viewer": false}
2023-01-05T04:04:15+00:00
49a02b36880687682db45f8478f2ddcf7983da27
# Dataset Card for `neuclir/1/fa` The `neuclir/1/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/fa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,232,016 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_fa', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_fa
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/fa`", "viewer": false}
2023-01-05T04:04:26+00:00
d39ee5367d56bae93cf56997780ccdf93db01c09
# Dataset Card for `neuclir/1/ru` The `neuclir/1/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=4,627,543 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_ru', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/ru`", "viewer": false}
2023-01-05T04:04:38+00:00
3cecf7c13de043e5ff05f4380a555fd8ce46bc4d
# Dataset Card for `neuclir/1/zh` The `neuclir/1/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,179,209 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neuclir_1_zh', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/neuclir_1_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T04:04:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neuclir/1/zh`", "viewer": false}
2023-01-05T04:04:49+00:00
c8b71939a9e755c0182e676bea668ac90574e84f
# Dataset Card for "results_valid_100rows_2023-01-05" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joddy/results_valid_100rows_2023-01-05
[ "region:us" ]
2023-01-05T06:45:12+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "resolution", "dtype": "int64"}, {"name": "attributes_loc", "dtype": {"class_label": {"names": {"0": "upper left", "1": "upper right", "2": "lower left", "3": "lower right"}}}}, {"name": "NL_text", "dtype": "string"}, {"name": "bbox_text", "dtype": "string"}, {"name": "center_text", "dtype": "string"}, {"name": "normed_object_bbox", "sequence": "int64"}, {"name": "without_pos_stable-diffusion-v1-5", "dtype": "image"}, {"name": "NL_stable-diffusion-v1-5", "dtype": "image"}, {"name": "bbox_stable-diffusion-v1-5", "dtype": "image"}, {"name": "center_stable-diffusion-v1-5", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_off", "dtype": "image"}, {"name": "NL_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_off", "dtype": "image"}, {"name": "bbox_only_tag_TextENC_off", "dtype": "image"}, {"name": "bbox_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_off", "dtype": "image"}, {"name": "center_only_tag_TextENC_off", "dtype": "image"}, {"name": "center_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_on", "dtype": "image"}, {"name": "NL_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_on", "dtype": "image"}, {"name": "bbox_only_tag_TextENC_on", "dtype": "image"}, {"name": "bbox_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_on", "dtype": "image"}, {"name": "center_only_tag_TextENC_on", "dtype": "image"}, {"name": "center_text_TextENC_on", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1033337709.0, "num_examples": 100}], "download_size": 1023757758, "dataset_size": 1033337709.0}}
2023-01-05T07:47:16+00:00
5c86cb5810cf41501b5e009e68967f021ac5e91d
# Dataset Card for "xquad_ar" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_ar
[ "region:us" ]
2023-01-05T07:17:31+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 1394144.8109243698, "num_examples": 963}, {"name": "validation", "num_bytes": 172277.5, "num_examples": 119}, {"name": "test", "num_bytes": 156352.68907563025, "num_examples": 108}], "download_size": 406718, "dataset_size": 1722775.0}}
2023-01-05T07:17:58+00:00
ad10ba17ccd8c81439459c827f2899b77e45bb69
# Dataset Card for "xquad_tr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zaid/xquad_tr
[ "region:us" ]
2023-01-05T07:18:00+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 979782.9050420168, "num_examples": 963}, {"name": "validation", "num_bytes": 121073.9, "num_examples": 119}, {"name": "test", "num_bytes": 109882.1949579832, "num_examples": 108}], "download_size": 353715, "dataset_size": 1210739.0}}
2023-01-05T07:18:26+00:00
884b0caaf95649264d7b4ff3052c9566866afcc1
# Dataset Card for "results_test_50rows_2023-01-05" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joddy/results_test_50rows_2023-01-05
[ "region:us" ]
2023-01-05T08:27:23+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "resolution", "dtype": "int64"}, {"name": "attributes_loc", "dtype": {"class_label": {"names": {"0": "upper left", "1": "upper right", "2": "lower left", "3": "lower right"}}}}, {"name": "NL_text", "dtype": "string"}, {"name": "bbox_text", "dtype": "string"}, {"name": "center_text", "dtype": "string"}, {"name": "normed_object_bbox", "sequence": "int64"}, {"name": "without_pos_stable-diffusion-v1-5", "dtype": "image"}, {"name": "NL_stable-diffusion-v1-5", "dtype": "image"}, {"name": "bbox_stable-diffusion-v1-5", "dtype": "image"}, {"name": "center_stable-diffusion-v1-5", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_off", "dtype": "image"}, {"name": "NL_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_off", "dtype": "image"}, {"name": "bbox_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_off", "dtype": "image"}, {"name": "center_text_TextENC_off", "dtype": "image"}, {"name": "without_pos_NL_text_TextENC_on", "dtype": "image"}, {"name": "NL_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_bbox_text_TextENC_on", "dtype": "image"}, {"name": "bbox_text_TextENC_on", "dtype": "image"}, {"name": "without_pos_center_text_TextENC_on", "dtype": "image"}, {"name": "center_text_TextENC_on", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 388907687.0, "num_examples": 50}], "download_size": 388936971, "dataset_size": 388907687.0}}
2023-01-05T08:59:14+00:00
b73bb1961701f6d766312a29c292b1a2513b8735
# Numerical Reasoning
lintang/numerical_reasoning_arithmetic
[ "region:us" ]
2023-01-05T08:48:37+00:00
{}
2023-01-09T06:33:43+00:00
07ac45f1d93a401a67420f30959baaceb36c2f26
# Paraphrase dataset for short phrases (chitchat + poetry) The dataset contains correct and incorrect paraphrases of short dialogue utterances (from the [chatbot project](https://github.com/Koziev/chatbot)) and of poem fragments (from the [generative poetry project](https://github.com/Koziev/verslibre)). The dataset is a list of sample tuples. Each sample consists of two lists: ```paraphrases``` - examples of correct paraphrases ```distractors``` - examples of incorrect paraphrases The dataset is used to build the [sbert_synonymy paraphrase detector](https://huggingface.co/inkoziev/sbert_synonymy) and the [generative poetic paraphraser](https://huggingface.co/inkoziev/paraphraser). ## Disclaimer The dataset deliberately allows paraphrase semantics to be non-conservative within certain limits. For example, the pair "_Помолчи_" ("Be quiet") and "_Дружище, не говори ни слова!_" ("Buddy, don't say a word!") counts as a correct paraphrase. Because the paraphraser is used in the generative poetry project to build datasets, it contains a number of metaphorical and rather loose paraphrases. These characteristics may make the dataset, and models built on it, unsuitable for your projects. ## Other paraphrase datasets When training models, you can combine this dataset with data from other paraphrase datasets, for example [tapaco](https://huggingface.co/datasets/tapaco).
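As a rough illustration of how the structure described above could be consumed, here is a minimal sketch that builds labeled sentence pairs for training a paraphrase detector. The file name and the JSON layout are assumptions; only the `paraphrases` and `distractors` keys come from the description above.

```python
import itertools
import json

# Hypothetical file name and layout: a JSON list of samples, each holding the
# two lists described above. Adjust to the actual distribution format.
with open('paraphrases.json', 'r', encoding='utf-8') as f:
    samples = json.load(f)

pairs = []  # (text_a, text_b, label): 1 = paraphrase, 0 = not a paraphrase
for sample in samples:
    positives = sample['paraphrases']   # correct paraphrases
    negatives = sample['distractors']   # incorrect paraphrases
    for a, b in itertools.combinations(positives, 2):
        pairs.append((a, b, 1))
    for a, b in itertools.product(positives, negatives):
        pairs.append((a, b, 0))
```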
inkoziev/paraphrases
[ "task_categories:sentence-similarity", "task_categories:text2text-generation", "task_ids:semantic-similarity-classification", "language_creators:expert-generated", "language:ru", "license:cc-by-nc-4.0", "region:us" ]
2023-01-05T09:08:02+00:00
{"language_creators": ["expert-generated"], "language": ["ru"], "license": "cc-by-nc-4.0", "task_categories": ["sentence-similarity", "text2text-generation"], "task_ids": ["semantic-similarity-classification"]}
2023-01-14T13:37:24+00:00
ea4ed1143e4baabdaf3a91db95a019e8a9c8a5b0
Hand-collected set of 57,817 pictures, mostly from the Russian internet, provided without captions. The dataset consists of those classic "funny pictures" collections from old discs and similar sources; all images in the root directory were gathered entirely by hand. The dataset is not annotated.
4eJIoBek/gazik-pics-57k
[ "task_categories:unconditional-image-generation", "size_categories:10K<n<100K", "license:wtfpl", "region:us" ]
2023-01-05T09:08:04+00:00
{"license": "wtfpl", "size_categories": ["10K<n<100K"], "task_categories": ["unconditional-image-generation"]}
2023-02-27T15:50:32+00:00
bacc37ac7dce72de829b372babe87f6eaae6abe6
appvizer/product-sheets-in-french
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:fr", "license:mit", "region:us" ]
2023-01-05T09:34:01+00:00
{"language": ["fr"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "viewer": false}
2023-03-06T10:13:50+00:00
c62247536813069dd31c7f7f73496e94fcf3da73
shreya2524/housing1
[ "license:apache-2.0", "region:us" ]
2023-01-05T11:08:12+00:00
{"license": "apache-2.0"}
2023-01-05T11:08:12+00:00
c08bb10aac2746fea1be03e1f09ea95567abf2e3
Someman/nepali-flag
[ "license:mit", "region:us" ]
2023-01-05T11:44:53+00:00
{"license": "mit"}
2023-01-05T11:45:55+00:00
3bf0d4f677da2344fd879104e25b92f3eb2eb9ed
Embrapa Wine Grape Instance Segmentation Dataset – Embrapa WGISD ================================================================ [![DOI](https://zenodo.org/badge/199083745.svg)](https://zenodo.org/badge/latestdoi/199083745) This is a detailed description of the dataset, a *datasheet for the dataset* as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010). Motivation for Dataset Creation ------------------------------- ### Why was the dataset created? Embrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created to provide images and annotation to study *object detection and instance segmentation* for image-based monitoring and field robotics in viticulture. It provides instances from five different grape varieties taken in the field. These instances show variance in grape pose, illumination and focus, including genetic and phenological variations such as shape, color and compactness. ### What (other) tasks could the dataset be used for? Possible uses include relaxations of the instance segmentation problem: classification (Is a grape in the image?), semantic segmentation (What are the "grape pixels" in the image?), object detection (Where are the grapes in the image?), and counting (How many berries are there per cluster?). The WGISD can also be used in grape variety identification. ### Who funded the creation of the dataset? The building of the WGISD dataset was supported by the Embrapa SEG Project 01.14.09.001.05.04, *Image-based metrology for Precision Agriculture and Phenotyping*, and the CNPq PIBIC Program (grants 161165/2017-6 and 125044/2018-6). Dataset Composition ------------------- ### What are the instances? Each instance consists of an RGB image and an annotation describing grape cluster locations as bounding boxes. A subset of the instances also contains binary masks identifying the pixels belonging to each grape cluster. Each image presents at least one grape cluster. Some grape clusters can appear far in the background and should be ignored. ### Are relationships between instances made explicit in the data? File name prefixes identify the variety observed in the instance. | Prefix | Variety | | --- | --- | | CDY | *Chardonnay* | | CFR | *Cabernet Franc* | | CSV | *Cabernet Sauvignon*| | SVB | *Sauvignon Blanc* | | SYH | *Syrah* | ### How many instances of each type are there? The dataset consists of 300 images containing 4,432 grape clusters identified by bounding boxes. A subset of 137 images also contains binary masks identifying the pixels of each cluster. It means that from the 4,432 clusters, 2,020 of them present binary masks for instance segmentation, as summarized in the following table. |Prefix | Variety | Date | Images | Boxed clusters | Masked clusters| | --- | --- | --- | --- | --- | --- | |CDY | *Chardonnay* | 2018-04-27 | 65 | 840 | 308| |CFR | *Cabernet Franc* | 2018-04-27 | 65 | 1,069 | 513| |CSV | *Cabernet Sauvignon* | 2018-04-27 | 57 | 643 | 306| |SVB | *Sauvignon Blanc* | 2018-04-27 | 65 | 1,316 | 608| |SYH | *Syrah* | 2017-04-27 | 48 | 563 | 285| |Total | | | 300 | 4,431 | 2,020| *General information about the dataset: the grape varieties and the associated identifying prefix, the date of image capture in the field, number of images (instances) and the identified grape clusters.* #### Contributions Another subset of 111 images with separated and non-occluded grape clusters was annotated with point annotations for every berry by F. Khoroshevsky and S. 
Khoroshevsky ([Khoroshevsky *et al.*, 2021](https://doi.org/10.1007/978-3-030-65414-6_19)). These annotations are available in `test_berries.txt`, `train_berries.txt` and `val_berries.txt` |Prefix | Variety | Berries | | --- | --- | --- | |CDY | *Chardonnay* | 1,102 | |CFR | *Cabernet Franc* | 1,592 | |CSV | *Cabernet Sauvignon* | 1,712 | |SVB | *Sauvignon Blanc* | 1,974 | |SYH | *Syrah* | 969 | |Total | | 7,349 | *Berry annotations by F. Khoroshevsky and S. Khoroshevsky.* Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66)) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in `contrib/berries`. Daniel Angelov (@23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory. ### What data does each instance consist of? Each instance contains an 8-bit RGB image and a text file containing one bounding box description per line. These text files follow the "YOLO format" CLASS CX CY W H *class* is an integer defining the object class – the dataset presents only the grape class, which is numbered 0, so every line starts with this “class zero” indicator. The center of the bounding box is the point *(c_x, c_y)*, represented as float values because this format normalizes the coordinates by the image dimensions. To get the absolute position, use *(2048 c_x, 1365 c_y)*. The bounding box dimensions are given by *W* and *H*, also normalized by the image size. The instances presenting mask data for instance segmentation contain files presenting the `.npz` extension. These files are compressed archives for NumPy $n$-dimensional arrays. Each array is an *H X W X n_clusters* three-dimensional array, where *n_clusters* is the number of grape clusters observed in the image. After assigning the NumPy array to a variable `M`, the mask for the *i*-th grape cluster can be found in `M[:,:,i]`. The *i*-th mask corresponds to the *i*-th line in the bounding boxes file. The dataset also includes the original image files, presenting the full original resolution. The normalized annotation for bounding boxes allows easy identification of clusters in the original images, but the mask data will need to be properly rescaled if users wish to work on the original full resolution. #### Contributions *For `test_berries.txt`, `train_berries.txt` and `val_berries.txt`*: The berry annotations follow a similar notation, the only exception being that each text file (train/val/test) also includes the instance file name. FILENAME CLASS CX CY where *filename* stands for the instance file name, *class* is an integer defining the object class (0 for all instances) and the point *(c_x, c_y)* indicates the absolute position of each "dot" marking a single berry in a well-defined cluster. *For `contrib/berries`*: The annotations provide the *(x, y)* point position for each berry center, in a tabular form: X Y These point-based annotations can be easily loaded using, for example, `numpy.loadtxt`. See `WGISD.ipynb` for examples. [Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory. Also see [COCO format](https://cocodataset.org/#format-data) for the JSON-based format. ### Is everything included or does the data rely on external resources? Everything is included in the dataset. 
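To make the annotation formats above concrete, here is a minimal sketch of reading one instance's annotations with NumPy. The file stem is hypothetical, the per-image pairing of `.txt`/`.npz` files and the `contrib/berries` layout are assumptions based on the description above, and the array key inside the `.npz` archive is read generically because it is not specified here.

```python
import numpy as np

stem = 'CDY_0001'   # hypothetical instance stem, used only for illustration
W, H = 2048, 1365   # REBEL image size given above (Z2 images are 2048 x 1536)

# Bounding boxes in the "YOLO format" described above: class cx cy w h, normalized.
boxes = np.loadtxt(f'{stem}.txt', ndmin=2)
abs_cx_cy_w_h = boxes[:, 1:] * np.array([W, H, W, H])  # absolute pixel coordinates

# Binary masks: a compressed NumPy archive with an H x W x n_clusters array;
# the i-th mask matches the i-th line of the bounding-box file.
with np.load(f'{stem}.npz') as archive:
    masks = archive[archive.files[0]]
first_cluster_mask = masks[:, :, 0]

# Berry centers (contrib/berries): one "x y" point per line, loadable with loadtxt.
berries = np.loadtxt(f'contrib/berries/{stem}.txt', ndmin=2)
```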
### Are there recommended data splits or evaluation measures? The dataset comes with specified train/test splits. The splits are found in lists stored as text files. There are also lists referring only to instances presenting binary masks. | | Images | Boxed clusters | Masked clusters | | ---------------------| -------- | ---------------- | ----------------- | | Training/Validation | 242 | 3,581 | 1,612 | | Test | 58 | 850 | 408 | | Total | 300 | 4,431 | 2,020 | *Dataset recommended split.* Standard measures from the information retrieval and computer vision literature should be employed: precision and recall, *F1-score* and average precision as seen in [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC). ### What experiments were initially run on this dataset? The first experiments run on this dataset are described in [*Grape detection, segmentation and tracking using deep neural networks and three-dimensional association*](https://arxiv.org/abs/1907.11819) by Santos *et al.* See also the following video demo: [![Grape detection, segmentation and tracking](http://img.youtube.com/vi/1Hji3GS4mm4/0.jpg)](http://www.youtube.com/watch?v=1Hji3GS4mm4 "Grape detection, segmentation and tracking") **UPDATE**: The JPG files corresponding to the video frames in the [video demo](http://www.youtube.com/watch?v=1Hji3GS4mm4) are now available in the `extras` directory. Data Collection Process ----------------------- ### How was the data collected? Images were captured at the vineyards of Guaspari Winery, located at Espírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon -46.741618). The winery staff performs dual pruning: one for shaping (after the previous year's harvest) and one for production, resulting in canopies of lower density. Image capture was carried out in April 2017 for *Syrah* and in April 2018 for the other varieties. A Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were used to capture the images. The cameras were located between the vine lines, facing the vines at distances of around 1-2 meters. The EOS REBEL T3i camera captured 240 images, including all *Syrah* pictures. The Z2 smartphone grabbed 60 images covering all varieties except *Syrah*. The REBEL images were scaled to *2048 X 1365* pixels and the Z2 images to *2048 X 1536* pixels. More data about the capture process can be found in the Exif data of the original image files, included in the dataset. ### Who was involved in the data collection process? T. T. Santos, A. A. Santos and S. Avila captured the images in the field. T. T. Santos, L. L. de Souza and S. Avila performed the annotation for bounding boxes and masks. ### How was the data associated with each instance acquired? The rectangular bounding boxes identifying the grape clusters were annotated using the [`labelImg` tool](https://github.com/tzutalin/labelImg). The clusters can be under severe occlusion by leaves, trunks or other clusters. Considering the absence of 3-D data and on-site annotation, the cluster locations had to be defined using only a single-view image, so some clusters could be incorrectly delimited. A subset of the bounding boxes was selected for mask annotation, using a novel tool developed by the authors and presented in this work. 
This interactive tool lets the annotator mark grape and background pixels using scribbles, and a graph matching algorithm developed by [Noma *et al.*](https://doi.org/10.1016/j.patcog.2011.08.017) is employed to perform image segmentation of every pixel in the bounding box, producing a binary mask representing grape/background classification. #### Contributions A subset of the bounding boxes of well-defined (separated and non-occluded) clusters was used for "dot" (berry) annotations of each grape to serve counting applications, as described in [Khoroshevsky *et al.*](https://doi.org/10.1007/978-3-030-65414-6_19). The berry annotation was performed by F. Khoroshevsky and S. Khoroshevsky. Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66)) provided point-based annotations for berries in all 300 images, summing 187,374 berries. These annotations are available in `contrib/berries`. Deng *et al.* employed [Huawei ModelArt](https://www.huaweicloud.com/en-us/product/modelarts.html) for their annotation effort. Data Preprocessing ------------------ ### What preprocessing/cleaning was done? The following steps were taken to process the data: 1. Bounding boxes were annotated for each image using the `labelImg` tool. 2. Images were resized to *W = 2048* pixels. This resolution proved to be practical for mask annotation, a convenient balance between grape detail and time spent by the graph-based segmentation algorithm. 3. A randomly selected subset of images was employed for mask annotation using the interactive tool based on graph matching. 4. All binary masks were inspected, in search of pixels attributed to more than one grape cluster. The annotator assigned the disputed pixels to the most likely cluster. 5. The bounding boxes were fitted to the masks, which provided a fine-tuning of grape cluster locations. ### Was the “raw” data saved in addition to the preprocessed data? The original resolution images, containing the Exif data provided by the cameras, are available in the dataset. Dataset Distribution -------------------- ### How is the dataset distributed? The dataset is [available on GitHub](https://github.com/thsant/wgisd). ### When will the dataset be released/first distributed? The dataset was released in July 2019. ### What license (if any) is it distributed under? The data is released under [**Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license)**](https://creativecommons.org/licenses/by-nc/4.0/). There is a request to cite the corresponding paper if the dataset is used. For commercial use, contact the Embrapa Agricultural Informatics business office. ### Are there any fees or access/export restrictions? There are no fees or restrictions. For commercial use, contact the Embrapa Agricultural Informatics business office. Dataset Maintenance ------------------- ### Who is supporting/hosting/maintaining the dataset? The dataset is hosted at Embrapa Agricultural Informatics and all comments or requests can be sent to [Thiago T. Santos](https://github.com/thsant) (maintainer). ### Will the dataset be updated? There are no scheduled updates. * In May 2022, [Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory. * In February 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berry ("dot") annotations. * In April 2021, Geng Deng provided point annotations for berries. T. 
Santos converted Deng's XML files to easier-to-load text files now available in `contrib/berries` directory. In case of further updates, releases will be properly tagged at GitHub. ### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? Contributors should contact the maintainer by e-mail. ### No warranty The maintainers and their institutions are *exempt from any liability, judicial or extrajudicial, for any losses or damages arising from the use of the data contained in the image database*.
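### Evaluation sketch

As a complement to the recommended measures above, the following is a minimal, illustrative sketch of how per-image precision, recall and *F1-score* could be computed for cluster detection at a fixed IoU threshold. It is not the official WGISD evaluation code, and the `[xmin, ymin, xmax, ymax]` box format is an assumption made for illustration only.

```python
# Illustrative sketch only (not the official WGISD evaluation code).
# Boxes are assumed to be [xmin, ymin, xmax, ymax] lists in pixel coordinates.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_scores(predicted, ground_truth, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted boxes against ground-truth boxes."""
    matched = set()
    true_positives = 0
    for pred in predicted:
        best_j, best_iou = None, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None and best_iou >= iou_threshold:
            matched.add(best_j)
            true_positives += 1
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For average precision as used by COCO and Pascal VOC, predictions would additionally be sorted by confidence and the precision-recall curve integrated; established tools such as `pycocotools` can be used for that.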
thsant/wgisd
[ "task_categories:object-detection", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:cc-by-nc-4.0", "agriculture", "viticulture", "fruit detection", "arxiv:1803.09010", "arxiv:1907.11819", "region:us" ]
2023-01-05T12:01:39+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "task_ids": [], "pretty_name": "Embrapa Wine Grape Instance Segmentation Dataset \u2013 Embrapa WGISD ", "viewer": false, "tags": ["agriculture", "viticulture", "fruit detection"]}
2023-01-05T17:24:09+00:00
0296ef2a4d400dbfa492c14b2b857c999fc3523a
11.5k Russian books in txt format, divided by genres. 11.5 thousand books of Russian literature; the dataset was made from the ancient "lib in poc" disc.
4eJIoBek/ru-libinpoc-11k
[ "task_categories:text-generation", "size_categories:10K<n<100K", "license:mit", "region:us" ]
2023-01-05T12:44:21+00:00
{"license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]}
2023-01-09T22:45:47+00:00
8773376cc71070fbbffa4fc13128edabf5e57bf9
Pinwheel/ActsOfAgression
[ "task_categories:video-classification", "size_categories:1K<n<10K", "license:mit", "Fight", "No-Fight", "region:us" ]
2023-01-05T12:51:09+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["video-classification"], "tags": ["Fight", "No-Fight"]}
2023-01-06T11:29:33+00:00
000dd81e685f6c10da8d2c37bd71c2fef0e92d59
# Dataset Card for "txt_to_gls_dts" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fuyulinh04/txt_to_gls_dts
[ "region:us" ]
2023-01-05T13:45:16+00:00
{"dataset_info": {"features": [{"name": "gloss", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10780088.8, "num_examples": 70168}, {"name": "test", "num_bytes": 2695022.2, "num_examples": 17542}], "download_size": 8157820, "dataset_size": 13475111.0}}
2023-01-05T18:34:58+00:00
116c215bb0817203a5255ba5a819b7b40c25a1fa
# Dataset Card for "diachronia-ocr-train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zombely/diachronia-ocr-train
[ "region:us" ]
2023-01-05T14:12:17+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51407894.0, "num_examples": 50}, {"name": "validation", "num_bytes": 10945929.0, "num_examples": 9}], "download_size": 62342762, "dataset_size": 62353823.0}}
2023-01-05T14:12:57+00:00
45e708d930ce42706461bd0f87d2b8dbbca42664
# Dataset Card for "News" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vencortex/News
[ "region:us" ]
2023-01-05T14:13:58+00:00
{"dataset_info": {"features": [{"name": "symbol", "dtype": "string"}, {"name": "publishedDate", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "image", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 834852911, "num_examples": 1495869}], "download_size": 170603751, "dataset_size": 834852911}}
2023-01-05T14:14:16+00:00
1f6b50c209aa111cf7e209f3840ddd408ae55afd
# Dataset Card for "test_6k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pyakymenko/test_6k
[ "region:us" ]
2023-01-05T14:33:32+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475682224.444, "num_examples": 6661}], "download_size": 473720429, "dataset_size": 475682224.444}}
2023-01-05T14:50:56+00:00
dbe96e27d52c70eb0067de744fb5608199d31656
# Dataset Card for "6k_mp3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arnepeine/6k_mp3
[ "region:us" ]
2023-01-05T14:54:25+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 475682224.444, "num_examples": 6661}], "download_size": 473720429, "dataset_size": 475682224.444}}
2023-01-05T15:02:12+00:00
9e098b81d57cc148c8fdd717c17ee85859f7c336
unseeng33k/positioning
[ "license:apache-2.0", "region:us" ]
2023-01-05T15:10:28+00:00
{"license": "apache-2.0"}
2023-01-05T15:10:28+00:00
92d4a4aed4d949023ea948840a94b368447b7f9f
# Dataset Card for ScienceIE

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://scienceie.github.io/index.html](https://scienceie.github.io/index.html)
- **Repository:** [https://github.com/ScienceIE/scienceie.github.io](https://github.com/ScienceIE/scienceie.github.io)
- **Paper:** [SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications](https://arxiv.org/abs/1704.02853)
- **Leaderboard:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
- **Size of downloaded dataset files:** 13.7 MB
- **Size of generated dataset files:** 17.4 MB

### Dataset Summary

ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents. A corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper. Publications were provided in plain text, in addition to xml format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles evenly distributed among the domains Computer Science, Material Sciences and Physics were selected.

The corpus consists of 350 documents for training, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5, for which 144 articles were used for training, 40 for development and 100 for testing.

There are three subtasks:

- Subtask (A): Identification of keyphrases
  - Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.
- Subtask (B): Classification of identified keyphrases
  - In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.
    - PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled PROCESS.
    - TASK: Keyphrases that denote the application, end goal, problem or task should be labelled TASK.
    - MATERIAL: MATERIAL keyphrases identify the resources used in the paper.
- Subtask (C): Extraction of relationships between two identified keyphrases
  - Every pair of keyphrases needs to be labelled by one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.
- HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if semantic field of A is included within that of B. One example is Red HYPONYM-OF Color. - SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML. Note: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The `id` consists of the document id and the example index within the document separated by an underscore, e.g. `S0375960115004120_1`. This should enable you to reconstruct the documents from the sentences. ### Supported Tasks and Leaderboards - **Tasks:** Key phrase extraction and relation extraction in scientific documents - **Leaderboards:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898) ### Languages The language in the dataset is English. ## Dataset Structure ### Data Instances #### subtask_a - **Size of downloaded dataset files:** 13.7 MB - **Size of the generated dataset:** 17.4 MB An example of 'train' looks as follows: ```json { "id": "S0375960115004120_1", "tokens": ["Another", "remarkable", "feature", "of", "the", "quantum", "field", "treatment", "can", "be", "revealed", "from", "the", "investigation", "of", "the", "vacuum", "state", "."], "tags": [0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0] } ``` #### subtask_b - **Size of downloaded dataset files:** 13.7 MB - **Size of the generated dataset:** 17.4 MB An example of 'train' looks as follows: ```json { "id": "S0375960115004120_2", "tokens": ["For", "a", "classical", "field", ",", "vacuum", "is", "realized", "by", "simply", "setting", "the", "potential", "to", "zero", "resulting", "in", "an", "unaltered", ",", "free", "evolution", "of", "the", "particle", "'s", "plane", "wave", "(", "|ψI〉=|ψIII〉=|k0", "〉", ")", "."], "tags": [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0] } ``` #### subtask_c - **Size of downloaded dataset files:** 13.7 MB - **Size of the generated dataset:** 30.1 MB An example of 'train' looks as follows: ```json { "id": "S0375960115004120_3", "tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."], "tags": [[0, 0, ...], [0, 0, ...], ...] } ``` Note: The tag sequence consists of vectors for each token, that encode what the relationship between that token and every other token in the sequence is for the first token in each key phrase. 
#### ner - **Size of downloaded dataset files:** 13.7 MB - **Size of the generated dataset:** 17.4 MB An example of 'train' looks as follows: ```json { "id": "S0375960115004120_4", "tokens": ["Let", "'s", "consider", ",", "for", "example", ",", "a", "superconducting", "resonant", "circuit", "as", "source", "of", "the", "field", "."], "tags": [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0] } ``` #### re - **Size of downloaded dataset files:** 13.7 MB - **Size of the generated dataset:** 16.4 MB An example of 'train' looks as follows: ```json { "id": "S0375960115004120_5", "tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."], "arg1_start": 2, "arg1_end": 4, "arg1_type": "Task", "arg2_start": 5, "arg2_end": 6, "arg2_type": "Material", "relation": 0 } ``` ### Data Fields #### subtask_a - `id`: the instance id of this sentence, a `string` feature. - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `tags`: the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a `list` of classification labels. ```python {"O": 0, "B": 1, "I": 2} ``` #### subtask_b - `id`: the instance id of this sentence, a `string` feature. - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `tags`: the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a `list` of classification labels. ```python {"O": 0, "M": 1, "P": 2, "T": 3} ``` #### subtask_c - `id`: the instance id of this sentence, a `string` feature. - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `tags`: a vector for each token, that encodes what the relationship between that token and every other token in the sequence is for the first token in each key phrase, a `list` of a `list` of a classification label. ```python {"O": 0, "S": 1, "H": 2} ``` #### ner - `id`: the instance id of this sentence, a `string` feature. - `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `tags`: the list of ner tags of this sentence, a `list` of classification labels. ```python {"O": 0, "B-Material": 1, "I-Material": 2, "B-Process": 3, "I-Process": 4, "B-Task": 5, "I-Task": 6} ``` #### re - `id`: the instance id of this sentence, a `string` feature. - `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features. - `arg1_start`: the 0-based index of the start token of the relation arg1 mention, an `ìnt` feature. - `arg1_end`: the 0-based index of the end token of the relation arg1 mention, exclusive, an `ìnt` feature. - `arg1_type`: the key phrase type of the end token of the relation arg1 mention, a `string` feature. - `arg2_start`: the 0-based index of the start token of the relation arg2 mention, an `ìnt` feature. 
- `arg2_end`: the 0-based index of the end token of the relation arg2 mention, exclusive, an `int` feature.
- `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature.
- `relation`: the relation label of this instance, a classification label.

```python
{"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
```

### Data Splits

|           | Train | Dev  | Test |
|-----------|-------|------|------|
| subtask_a | 2388  | 400  | 838  |
| subtask_b | 2388  | 400  | 838  |
| subtask_c | 2388  | 400  | 838  |
| ner       | 2388  | 400  | 838  |
| re        | 24558 | 4838 | 6618 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{DBLP:journals/corr/AugensteinDRVM17,
  author    = {Isabelle Augenstein and Mrinal Das and Sebastian Riedel and Lakshmi Vikraman and Andrew McCallum},
  title     = {SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications},
  journal   = {CoRR},
  volume    = {abs/1704.02853},
  year      = {2017},
  url       = {http://arxiv.org/abs/1704.02853},
  eprinttype = {arXiv},
  eprint    = {1704.02853},
  timestamp = {Mon, 13 Aug 2018 16:46:36 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/AugensteinDRVM17.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset.
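### Usage example

A minimal usage sketch for loading one of the configurations described above (here `ner`); the configuration names follow the "Data Instances" section of this card:

```python
from datasets import load_dataset

# Load the NER-style view of ScienceIE (BIO tags over Material/Process/Task).
science_ie = load_dataset("DFKI-SLT/science_ie", "ner")

example = science_ie["train"][0]
print(example["id"])      # document id + sentence index, e.g. "S0375960115004120_1"
print(example["tokens"])  # spaCy tokens of the sentence
print(example["tags"])    # integer class labels (see "Data Fields")

# Map the integer tags back to their string names for inspection.
tag_names = science_ie["train"].features["tags"].feature.names
print([tag_names[tag] for tag in example["tags"]])
```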
DFKI-SLT/science_ie
[ "task_categories:token-classification", "task_categories:text-classification", "task_ids:named-entity-recognition", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:other", "research papers", "scientific papers", "arxiv:1704.02853", "region:us" ]
2023-01-05T15:32:00+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification", "text-classification"], "task_ids": ["named-entity-recognition", "multi-class-classification"], "pretty_name": "ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents", "tags": ["research papers", "scientific papers"], "dataset_info": [{"config_name": "ner", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-Material", "2": "I-Material", "3": "B-Process", "4": "I-Process", "5": "B-Task", "6": "I-Task"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "re", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "dtype": "string"}, {"name": "arg1_start", "dtype": "int32"}, {"name": "arg1_end", "dtype": "int32"}, {"name": "arg1_type", "dtype": "string"}, {"name": "arg2_start", "dtype": "int32"}, {"name": "arg2_end", "dtype": "int32"}, {"name": "arg2_type", "dtype": "string"}, {"name": "relation", "dtype": {"class_label": {"names": {"0": "O", "1": "Synonym-of", "2": "Hyponym-of"}}}}], "splits": [{"name": "train", "num_bytes": 11738520, "num_examples": 24558}, {"name": "validation", "num_bytes": 2347796, "num_examples": 4838}, {"name": "test", "num_bytes": 2835275, "num_examples": 6618}], "download_size": 13704567, "dataset_size": 16921591}, {"config_name": "subtask_a", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B", "2": "I"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "subtask_b", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "O", "1": "M", "2": "P", "3": "T"}}}}], "splits": [{"name": "train", "num_bytes": 1185670, "num_examples": 2388}, {"name": "validation", "num_bytes": 204095, "num_examples": 400}, {"name": "test", "num_bytes": 399069, "num_examples": 838}], "download_size": 13704567, "dataset_size": 1788834}, {"config_name": "subtask_c", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"sequence": {"class_label": {"names": {"0": "O", "1": "S", "2": "H"}}}}}], "splits": [{"name": "train", "num_bytes": 20103682, "num_examples": 2388}, {"name": "validation", "num_bytes": 3575511, "num_examples": 400}, {"name": "test", "num_bytes": 6431513, "num_examples": 838}], "download_size": 13704567, "dataset_size": 30110706}]}
2023-01-19T11:26:55+00:00
6336d79bd2837d8b278104c852dba600c6abcea6
# Oracle

These are scanned images of imaginative text, similar to Chinese oracle bone inscriptions, created by the great artist Meiling Han. This dataset can be fed into a Generative Adversarial Network (GAN) to produce similar characters for creating modern art.
KokeCacao/oracle
[ "region:us" ]
2023-01-05T15:36:40+00:00
{}
2023-01-05T16:02:50+00:00
925662cc6c2e4577b23143658b507b6b8622512c
# Dataset Card for aeroBERT-NER

## Dataset Description

- **Paper:** aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset contains sentences from the aerospace requirements domain. The sentences are tagged for five NER categories (SYS, VAL, ORG, DATETIME, and RES) using the BIO tagging scheme. There are a total of 1432 sentences.

The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **token identification** (NER) specific to the aerospace domain <br>

This dataset can be used for training or fine-tuning language models for the identification of mentioned Named-Entities in aerospace texts.

## Dataset Structure

The dataset is of the format: ``Sentence-Number * WordPiece-Token * NER-tag`` <br>
"*" is used as a delimiter to avoid confusion with commas (",") that occur in the text.

The following example shows the dataset structure for Sentence #1431. <br>

1431\*the\*O <br>
1431\*airplane\*B-SYS <br>
1431\*takeoff\*O <br>
1431\*performance\*O <br>
1431\*must\*O <br>
1431\*be\*O <br>
1431\*determined\*O <br>
1431\*for\*O <br>
1431\*climb\*O <br>
1431\*gradients\*O <br>
1431\*.\*O <br>

## Dataset Creation

### Source Data

Two types of aerospace texts are used to create the aerospace corpus for fine-tuning BERT: <br>
(1) general aerospace texts such as publications by the National Academy of Space Studies Board, and <br>
(2) certification requirements from Title 14 CFR.

A total of 1432 sentences from the aerospace domain were included in the corpus. <br>

### Importing dataset into Python environment

Use the following code chunk to import the dataset into Python environment as a DataFrame.

```
from datasets import load_dataset
import pandas as pd

dataset = load_dataset("archanatikayatray/aeroBERT-NER")

#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)

#Getting the headers from the first row
header = dataset.iloc[0]

#Excluding the first row since it contains the headers
dataset = dataset[1:]

#Assigning the header to the DataFrame
dataset.columns = header

#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```

### Annotations

#### Annotation process

A Subject Matter Expert (SME) was consulted for deciding on the annotation categories. The BIO tagging scheme was used for annotating the dataset.
**B** - Beginning of entity <br>
**I** - Inside an entity <br>
**O** - Outside an entity <br>

| Category | NER Tags | Example |
| :----: | :----: | :----: |
| System | B-SYS, I-SYS | exhaust heat exchangers, powerplant, auxiliary power unit |
| Value | B-VAL, I-VAL | 1.2 percent, 400 feet, 10 to 19 passengers |
| Date time | B-DATETIME, I-DATETIME | 2013, 2019, May 11, 1991 |
| Organization | B-ORG, I-ORG | DOD, Ames Research Center, NOAA |
| Resource | B-RES, I-RES | Section 25-341, Sections 25-173 through 25-177, Part 23 subpart B |

The distribution of the various entities in the corpus is shown below - <br>

| NER Tag | Description | Count |
| :----: | :----: | :----: |
| O | Tokens that are not identified as any NE | 37686 |
| B-SYS | Beginning of a system NE | 1915 |
| I-SYS | Inside a system NE | 1104 |
| B-VAL | Beginning of a value NE | 659 |
| I-VAL | Inside a value NE | 507 |
| B-DATETIME | Beginning of a date time NE | 147 |
| I-DATETIME | Inside a date time NE | 63 |
| B-ORG | Beginning of an organization NE | 302 |
| I-ORG | Inside an organization NE | 227 |
| B-RES | Beginning of a resource NE | 390 |
| I-RES | Inside a resource NE | 1033 |

### Limitations

(1) The dataset is imbalanced, as is natural for language (not every word is a named entity). Hence, using ``Accuracy`` as a metric for model performance is NOT a good idea. The use of Precision, Recall, and F1 scores is suggested for model performance evaluation.

(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment (one possible split is sketched at the end of this card). Please refer to the Appendix of the paper for information on the test set.

### Citation Information

```
@Article{aeroBERT-NER,
  AUTHOR = {Tikayat Ray, Archana and Pinon Fischer, Olivia J. and Mavris, Dimitri N. and White, Ryan T. and Cole, Bjorn F.},
  TITLE = {aeroBERT-NER: Named-Entity Recognition for Aerospace Requirements Engineering using BERT},
  JOURNAL = {AIAA SCITECH 2023 Forum},
  YEAR = {2023},
  URL = {https://arc.aiaa.org/doi/10.2514/6.2023-2583},
  DOI = {10.2514/6.2023-2583}
}

@phdthesis{tikayatray_thesis,
  author = {Tikayat Ray, Archana},
  title = {Standardization of Engineering Requirements Using Large Language Models},
  school = {Georgia Institute of Technology},
  year = {2023},
  doi = {10.13140/RG.2.2.17792.40961},
  URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
```
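### Example: creating a train/validation/test split

One possible way to create the suggested training/validation/test split, building on the import snippet above; the 80/10/10 ratio, the random seed, and the use of the first column as the sentence number are illustrative assumptions, not the split used in the paper.

```
# Illustrative sketch only: split by sentence so that all tokens of a sentence
# stay in the same subset. The 80/10/10 ratio and the assumption that the first
# column holds the sentence number are NOT taken from the paper.
import random

sentence_col = dataset.columns[0]            # assumed to hold the sentence number
sentence_ids = list(dataset[sentence_col].unique())

random.seed(42)
random.shuffle(sentence_ids)

n = len(sentence_ids)
train_ids = set(sentence_ids[:int(0.8 * n)])
val_ids = set(sentence_ids[int(0.8 * n):int(0.9 * n)])
test_ids = set(sentence_ids[int(0.9 * n):])

train_df = dataset[dataset[sentence_col].isin(train_ids)]
val_df = dataset[dataset[sentence_col].isin(val_ids)]
test_df = dataset[dataset[sentence_col].isin(test_ids)]

print(len(train_df), len(val_df), len(test_df))
```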
archanatikayatray/aeroBERT-NER
[ "task_categories:token-classification", "size_categories:n<1K", "language:en", "license:apache-2.0", "NER", "Aerospace", "ORG", "SYS", "DATETIME", "RESOURCE", "VALUE", "doi:10.57967/hf/0470", "region:us" ]
2023-01-05T15:43:58+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["token-classification"], "pretty_name": "all_text_annotation_NER.txt", "tags": ["NER", "Aerospace", "ORG", "SYS", "DATETIME", "RESOURCE", "VALUE"]}
2023-05-20T21:40:58+00:00
b801fd035e0f2eeda6d356db4a485791fd3c64d7
# Dataset Card for UIBert

## Dataset Description

- **Homepage:** https://github.com/google-research-datasets/uibert
- **Repository:** https://github.com/google-research-datasets/uibert
- **Paper:** https://arxiv.org/abs/2107.13731
- **Leaderboard:**
  - UIBert: https://arxiv.org/abs/2107.13731
  - Pix2Struct: https://arxiv.org/pdf/2210.03347

### Dataset Summary

This is a Hugging Face formatted dataset derived from the [Google UIBert dataset](https://github.com/google-research-datasets/uibert), which is in turn derived from the [RICO dataset](https://interactionmining.org/rico).

### Supported Tasks and Leaderboards

- UI Understanding
- UI Referring Expressions
- UI Action Automation

### Languages

- English

## Dataset Structure

- `screenshot`: blob of pixels.
- `prompt`: Prompt referring to a UI component with an optional action verb. For example "click on search button next to menu drawer."
- `target_bounding_box`: Bounding box of the targeted UI component. `[xmin, ymin, xmax, ymax]`

### Data Splits

- train: 15K samples
- validation: 471 samples
- test: 565 samples

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
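### Usage example

A minimal usage sketch; field and split names follow the structure described above, and the exact string encoding of `target_bounding_box` (e.g. a JSON-serialized `[xmin, ymin, xmax, ymax]`) should be checked against an actual sample:

```python
from datasets import load_dataset

# Load the UI referring-expression data; split names follow the card above.
ds = load_dataset("ivelin/ui_refexp", split="validation")

sample = ds[0]
print(sample["prompt"])               # e.g. "click on search button next to menu drawer"
print(sample["target_bounding_box"])  # bounding box of the referred UI component, stored as a string
sample["screenshot"]                  # decoded screenshot image
```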
ivelin/ui_refexp
[ "task_categories:image-to-text", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "ui-referring-expression", "ui-refexp", "arxiv:2107.13731", "arxiv:2210.03347", "region:us" ]
2023-01-05T16:32:50+00:00
{"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "UI understanding", "tags": ["ui-referring-expression", "ui-refexp"], "dataset_info": {"features": [{"name": "screenshot", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "target_bounding_box", "dtype": "string"}], "config_name": "ui_refexp", "splits": [{"name": "train", "num_bytes": 562037265, "num_examples": 15624}, {"name": "validation", "num_bytes": 60399225, "num_examples": 471}, {"name": "test", "num_bytes": 69073969, "num_examples": 565}], "download_size": 6515012176, "dataset_size": 691510459}}
2023-01-08T03:33:10+00:00
6e6baf9444ff16c5a1131c73fb510adc73319b3a
# Dataset Card for "subj_multi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bstrai/subj_multi
[ "region:us" ]
2023-01-05T16:35:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "objective", "1": "subjective"}}}}, {"name": "language", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2914488, "num_examples": 16000}, {"name": "train", "num_bytes": 11518066, "num_examples": 64000}], "download_size": 8870704, "dataset_size": 14432554}}
2023-01-17T17:17:15+00:00
0f29b41e2ee582182ccfc9413342d7f3d411c67b
# AutoTrain Dataset for project: code-explainer ## Dataset Description This dataset has been automatically processed by AutoTrain for project code-explainer. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "def upload_to_s3(local_file, bucket, s3_file):\n ## This function is responsible for uploading the file into the S3 bucket using the specified credentials. \n s3 = boto3.client('s3', aws_access_key_id=ACCESS_KEY,\n aws_secret_access_key=SECRET_KEY)\n try:\n s3.upload_file(local_file, bucket, s3_file)\n print(\"Upload Successful\")\n return True\n except FileNotFoundError:\n print(\"The file was not found\")\n return False\n except NoCredentialsError:\n print(\"Credentials not available\")\n return False\n\n\nresult = upload_to_s3(LOCAL_FILE, BUCKET_NAME, S3_FILE_NAME)", "target": "Create a function upload_to_s3 the fumction is responsible for uploading the file into the s3 bucket to do so\n1. First creating a client object that will be used to interact with the S3 service using the boto3\n(Boto3 makes it easy to integrate your Python application, library, or script with AWS services including Amazon S3, Amazon EC2, Amazon DynamoDB, and more.)\n2. We make a use of try/catch block to upload the images in s3 bucket \n3. To upload the image we use the upload_file function of s3 client if the upload is successful will return the True with print statement.\n4. In case of exception first is FileNotFoundError will return the false.\n\n(Any message with the contents FileNotFoundError indicates that Python cannot find the file you are referencing. Python raises this error because your program cannot continue running without being able to access the file to which your program refers. )\n\n5. The next except block is NoCredentialsError will return the False along with print statement\n\n(The NoCredentialsError is an error encountered when using the Boto3 library to interface with Amazon Web Services (AWS). Specifically, this error is encountered when your AWS credentials are missing, invalid, or cannot be located by your Python script.)\n", "feat_language": "python", "feat_status": "annotated", "feat_user_created": "6888d00e-fda2-4061-9038-7a86b12c9d9b" }, { "text": "def main(username):\n banner()\n '''main function accept instagram username\n return an dictionary object containging profile deatils\n '''\n\n url = \"https://www.instagram.com/{}/?hl=en\".format(username)\n page = requests.get(url)\n tree = html.fromstring(page.content)\n data = tree.xpath('//meta[starts-with(@name,\"description\")]/@content')\n\n if data:\n data = tree.xpath('//meta[starts-with(@name,\"description\")]/@content')\n data = data[0].split(', ')\n followers = data[0][:-9].strip()\n following = data[1][:-9].strip()\n posts = re.findall(r'\\d+[,]*', data[2])[0]\n name = re.findall(r'name\":\"([^\"]+)\"', page.text)[0]\n aboutinfo = re.findall(r'\"description\":\"([^\"]+)\"', page.text)[0]\n instagram_profile = {\n 'success': True,\n 'profile': {\n 'name': name,\n 'profileurl': url,\n 'username': username,\n 'followers': followers,\n 'following': following,\n 'posts': posts,\n 'aboutinfo': aboutinfo\n }\n }\n else:\n instagram_profile = {\n 'success': False,\n 'profile': {}\n }\n return instagram_profile\n", "target": "Create a function main that accepts an Instagram username and returns a dictionary object containing profile details.\n1. 
The code first requests the URL of the user's profile from Instagram, then it parses out all of the information on that page into variables.\n2. Then xpath is used to find all tags within this HTML document starting with \"description\" and splitting them by commas until there are no more results found.\n3 we use the findall function of re module and find the post name info and store it in the dictionary and return the dictionary.\n4. Else will just return the dictionary with success is False.\n", "feat_language": "python", "feat_status": "annotated", "feat_user_created": "6888d00e-fda2-4061-9038-7a86b12c9d9b" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)", "feat_language": "Value(dtype='string', id=None)", "feat_status": "Value(dtype='string', id=None)", "feat_user_created": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 92 | | valid | 23 |
sagard21/autotrain-data-code-explainer
[ "task_categories:summarization", "region:us" ]
2023-01-05T18:02:20+00:00
{"task_categories": ["summarization"]}
2023-01-05T18:03:02+00:00
f62a921ce2d9906c5d41db2631381fcc6a9e2c06
# Dataset Card for "hearthstone-cards-512" # Not affiliated in anyway with Blizzard nor Hearthstone # Please note that this entrie dataset contains copyrighted matirial
Norod78/hearthstone-cards-512
[ "task_categories:text-to-image", "size_categories:n<10K", "blizzard", "hearthstone", "game cards", "region:us" ]
2023-01-05T18:41:08+00:00
{"size_categories": ["n<10K"], "task_categories": ["text-to-image"], "pretty_name": "Blizzard Hearthstone cards, resized to 512x512 with OCR text field", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 230518521.36, "num_examples": 2952}], "download_size": 230628184, "dataset_size": 230518521.36}, "tags": ["blizzard", "hearthstone", "game cards"]}
2023-01-05T18:48:19+00:00
81483a2eb455bc5c5afa925f8ab2dc9976b99ff6
Imppres, but it works https://github.com/facebookresearch/Imppres ``` @inproceedings{jeretic-etal-2020-natural, title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}", author = "Jereti\v{c}, Paloma and Warstadt, Alex and Bhooshan, Suvrat and Williams, Adina", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.768", doi = "10.18653/v1/2020.acl-main.768", pages = "8690--8705", abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.", } ```
tasksource/imppres
[ "task_categories:text-classification", "task_ids:natural-language-inference", "language:en", "license:apache-2.0", "region:us" ]
2023-01-05T20:14:45+00:00
{"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"]}
2023-06-21T11:52:43+00:00
7349638eada130a409f81331b2d7be88f196201f
hsong1101/news_summarization
[ "license:pddl", "region:us" ]
2023-01-05T22:01:49+00:00
{"license": "pddl", "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4643521852, "num_examples": 696389}, {"name": "test", "num_bytes": 1160885464, "num_examples": 174098}], "download_size": 978222798, "dataset_size": 5804407316}}
2023-01-05T22:22:21+00:00
03f4167589d129223f29c61e324311c80df56b8e
# Dataset Card for "dataset_glstxt" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fuyulinh04/dataset_glstxt
[ "region:us" ]
2023-01-05T23:20:43+00:00
{"dataset_info": {"features": [{"name": "gloss", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11227076.8, "num_examples": 73696}, {"name": "test", "num_bytes": 2806769.2, "num_examples": 18424}], "download_size": 8513566, "dataset_size": 14033846.0}}
2023-01-05T23:21:14+00:00
abc389e1efad2be4e2d214b7af0d775217a3a188
# Character Embedding - Princess Tutu/Ahiru

![princess_tutu_showcase.png](https://s3.amazonaws.com/moonup/production/uploads/1672973706523-6366fabccbf2cf32918c2830.png)

## Usage

To use an embedding, download the .pt file and place it in "\stable-diffusion-webui\embeddings". In your prompt, write ```"princess_tutu-6500"```.

## License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
kxly/princess_tutu
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2023-01-06T02:00:33+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "pretty_name": "Princess Tutu", "thumbnail": "https://huggingface.co/datasets/kxly/princess_tutu/blob/main/princess_tutu_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2023-01-06T02:55:47+00:00
38ff03d68347aaf694e598c50cb164191f50f61c
# Dataset Card for DrugProt

## Dataset Description

- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE

The DrugProt corpus consists of (a) expert-labelled chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types. The corpus was introduced in the context of BioCreative VII Track 1 (Text mining drug and chemical-protein interactions).

## Citation Information

```
@inproceedings{miranda2021overview,
  title={Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of \
         drug-gene/protein relations},
  author={Miranda, Antonio and Mehryary, Farrokh and Luoma, Jouni and Pyysalo, Sampo and Valencia, Alfonso \
          and Krallinger, Martin},
  booktitle={Proceedings of the seventh BioCreative challenge evaluation workshop},
  year={2021}
}
```
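## Usage

A minimal loading sketch. The configuration name below follows the usual BigBIO convention (`drugprot_bigbio_kb`), and the standardized KB-schema fields are assumptions to verify against the dataset script.

```python
from datasets import load_dataset

# Assumption: BigBIO-style configuration name; check the dataset script if this fails.
drugprot = load_dataset("bigbio/drugprot", name="drugprot_bigbio_kb")

doc = drugprot["train"][0]
print(doc["entities"][:3])   # expert-labelled chemical and gene mentions with offsets
print(doc["relations"][:3])  # binary chemical-gene relations
```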
bigbio/drugprot
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
2023-01-06T03:27:49+00:00
{"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "DrugProt", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-1/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]}
2023-01-06T03:30:02+00:00
970237b9a7497de2e3a925113b8c20be87a3abf5
# Dataset Card for CPI

## Dataset Description

- **Homepage:** https://github.com/KerstenDoering/CPI-Pipeline
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE

The compound-protein relationship (CPI) dataset consists of 2,613 sentences from abstracts containing annotations of proteins, small molecules, and their relationships.

## Citation Information

```
@article{doring2020automated,
  title={Automated recognition of functional compound-protein relationships in literature},
  author={D{\"o}ring, Kersten and Qaseem, Ammar and Becer, Michael and Li, Jianyu and Mishra, Pankaj and Gao, Mingjie and Kirchner, Pascal and Sauter, Florian and Telukunta, Kiran K and Moumbock, Aur{\'e}lien FA and others},
  journal={Plos one},
  volume={15},
  number={3},
  pages={e0220925},
  year={2020},
  publisher={Public Library of Science San Francisco, CA USA}
}
```
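## Usage

A minimal loading sketch; as with other BigBIO datasets, the configuration name (`cpi_bigbio_kb`) and the standardized KB-schema fields are assumptions to check against the dataset script.

```python
from datasets import load_dataset

# Assumption: BigBIO-style configuration name; check the dataset script if this fails.
cpi = load_dataset("bigbio/cpi", name="cpi_bigbio_kb")

# Count how many documents carry at least one compound-protein relation.
with_relations = sum(1 for doc in cpi["train"] if doc["relations"])
print(with_relations, "of", len(cpi["train"]), "documents have relations")

# Inspect one annotated example.
doc = cpi["train"][0]
print(doc["passages"][0]["text"])  # the annotated sentence(s)
print(doc["entities"])             # protein / small-molecule mentions
```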
bigbio/cpi
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2023-01-06T03:44:03+00:00
{"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "CPI", "bigbio_language": ["English"], "bigbio_license_shortname": "ISC", "homepage": "https://github.com/KerstenDoering/CPI-Pipeline", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]}
2023-01-06T03:46:05+00:00
0294b3bd3bfc4586f9e0be72ff5218deb032f8e0
# Dataset Card for "arxiv_mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xvjiarui/arxiv_mini
[ "region:us" ]
2023-01-06T03:55:50+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3966992.0, "num_examples": 11}, {"name": "validation", "num_bytes": 7430590.0, "num_examples": 21}], "download_size": 11396049, "dataset_size": 11397582.0}}
2023-01-06T03:56:02+00:00
7b26911122fc049fec6a89a3e8b8d59f41e9fafe
# Dataset Card for "dreambooth-hackathon-images-srkman-2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Xhaheen/dreambooth-hackathon-images-srkman-2
[ "region:us" ]
2023-01-06T04:14:08+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4082680.0, "num_examples": 20}], "download_size": 4081453, "dataset_size": 4082680.0}}
2023-01-06T04:14:11+00:00
c06d338555dc45ca0beeaa1359170e3464a52c8d
# Dataset Card for "blah" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
deepaksingh/blah
[ "region:us" ]
2023-01-06T04:35:21+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "10k", "1": "2.5k", "2": "5k"}}}}], "splits": [{"name": "train", "num_bytes": 888512750.0, "num_examples": 348}], "download_size": 888503946, "dataset_size": 888512750.0}}
2023-01-06T04:39:43+00:00
3dab750b0c52d17267a63dbf7e629481c57d82ad
shreya2524/housing2
[ "license:apache-2.0", "region:us" ]
2023-01-06T05:52:48+00:00
{"license": "apache-2.0"}
2023-01-06T05:52:48+00:00
4ae21f679f3b2fdb2102f04d9ef104bbdc8714b9
# Green patents dataset

- num_rows: 9145
- features: [title, label]
- label: 0, 1

The dataset contains patent titles that are labeled as 1 (= "green") and 0 (= "not green").

"Green" patent titles were gathered by searching for CPC class "Y02" with Google Patents (query: "status:APPLICATION type:PATENT (Y02) country:EP,US", 05/01/2023).

"Not green" patent titles are derived from the [HUPD dataset](https://huggingface.co/datasets/HUPD/hupd) (random choice of 5000 titles). We could not find any patents in HUPD assigned to any CPC class starting with "Y".
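## Usage

A minimal sketch of loading the dataset and checking the label balance; the `train` split name is an assumption based on the default Hub layout.

```python
from collections import Counter

from datasets import load_dataset

# Load the patent titles and their green / not-green labels.
ds = load_dataset("cwinkler/green_patents", split="train")

print(ds[0])                 # e.g. {'title': ..., 'label': 0 or 1}
print(Counter(ds["label"]))  # rough balance of "green" (1) vs "not green" (0) titles
```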
cwinkler/green_patents
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-01-06T06:12:33+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]}
2023-01-08T09:16:25+00:00
d591b39ccd6886510e7b1957542c7855cc1b81c8
# Dataset Card for "dreambooth-hackathon-owczarek" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
misza222/dreambooth-hackathon-owczarek
[ "region:us" ]
2023-01-06T06:21:34+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3487329.0, "num_examples": 16}], "download_size": 3488676, "dataset_size": 3487329.0}}
2023-01-06T06:21:38+00:00
26fbfb87fab0216775081b969468814000ea1b70
# Dataset Card for "bookcorpus_compact_1024_shard0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard0_of_10
[ "region:us" ]
2023-01-06T07:01:59+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 738086319, "num_examples": 61605}], "download_size": 371729131, "dataset_size": 738086319}}
2023-01-06T07:02:33+00:00
1022bb8dace895db459f90f31ad27f486d80e13e
# Dataset Card for HunSum-1

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)

## Dataset Description

### Dataset Summary

The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with leads and other metadata. The dataset contains articles from 9 major Hungarian news websites.

### Supported Tasks and Leaderboards

- 'summarization'
- 'title generation'

## Dataset Structure

### Data Fields

- `uuid`: a string containing the unique id
- `article`: a string containing the body of the news article
- `lead`: a string containing the lead of the article
- `title`: a string containing the title of the article
- `url`: a string containing the URL for the article
- `domain`: a string containing the domain of the url
- `date_of_creation`: a timestamp containing the date when the article was created
- `tags`: a sequence containing the tags of the article

### Data Splits

The HunSum-1 dataset has 3 splits: _train_, _validation_, and _test_.

| Dataset Split | Number of Instances in Split |
| ------------- | ---------------------------- |
| Train         | 1,144,255                    |
| Validation    | 1996                         |
| Test          | 1996                         |

## Citation

If you use our dataset, please cite the following paper:

```
@inproceedings {HunSum-1,
    title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
    booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
    year = {2023},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Magyarország},
    author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
    pages = {231--243}
}
```
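## Usage

A minimal usage sketch for the summarization task; field and split names follow the card above.

```python
from datasets import load_dataset

# Load the Hungarian news corpus with its train/validation/test splits.
hunsum = load_dataset("SZTAKI-HLT/HunSum-1")

example = hunsum["validation"][0]
print(example["title"])           # article title (useful for title generation)
print(example["lead"])            # the lead, i.e. the reference summary
print(example["article"][:500])   # beginning of the article body
```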
SZTAKI-HLT/HunSum-1
[ "task_categories:summarization", "task_ids:news-articles-summarization", "multilinguality:monolingual", "language:hu", "license:cc-by-nc-sa-4.0", "region:us" ]
2023-01-06T07:42:26+00:00
{"language": ["hu"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "pretty_name": "HunSum-1"}
2023-01-24T16:21:00+00:00
dd7cea1a69b543124ba399fb14981d530b0acc2a
# Dataset Card for "bookcorpus_compact_1024_shard1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard1_of_10
[ "region:us" ]
2023-01-06T07:48:43+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 733627676, "num_examples": 61605}], "download_size": 367870833, "dataset_size": 733627676}}
2023-01-06T07:49:12+00:00
1b6b830016de70123bff37b29dfc2525ace9a3cc
# Dataset Card for "bookcorpus_compact_1024_shard3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
saibo/bookcorpus_compact_1024_shard3_of_10
[ "region:us" ]
2023-01-06T08:25:12+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 764655737, "num_examples": 61605}], "download_size": 384654577, "dataset_size": 764655737}}
2023-01-06T08:27:21+00:00
5262bf3395485bbc0fa6de9bf9edf373e9be7b21
# Dataset Card for "rico-screen2words" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pinkmooncake/rico-screen2words
[ "region:us" ]
2023-01-06T09:09:06+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 454423304.26, "num_examples": 4310}, {"name": "dev", "num_bytes": 246957743.116, "num_examples": 2364}, {"name": "train", "num_bytes": 1737030544.084, "num_examples": 15743}], "download_size": 1897987283, "dataset_size": 2438411591.46}}
2023-01-07T04:18:11+00:00