Record schema (seven fields per entry):

- `sha`: string, length 40
- `text`: string, length 0 to 13.4M
- `id`: string, length 2 to 117
- `tags`: list
- `created_at`: string, length 25
- `metadata`: string, length 2 to 31.7M
- `last_modified`: string, length 25
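The seven fields above repeat once per record. As a minimal sketch, the first record below can be represented as a pandas row; the `text` and `metadata` values are abbreviated here, and the use of pandas is an illustrative assumption rather than part of the dump itself.

```python
import pandas as pd

# One record from the dump, with long fields abbreviated.
rows = [{
    "sha": "87989a49e98945c2be5bc022392bc40a97d6311b",
    "text": "# Dataset Card for `mr-tydi/ja/train` ...",
    "id": "irds/mr-tydi_ja_train",
    "tags": ["task_categories:text-retrieval",
             "source_datasets:irds/mr-tydi_ja", "region:us"],
    "created_at": "2023-01-05T03:35:55+00:00",
    "metadata": '{"source_datasets": ["irds/mr-tydi_ja"], ...}',
    "last_modified": "2023-01-05T03:36:01+00:00",
}]
df = pd.DataFrame(rows)

# The schema constraints above hold: sha is a 40-char digest,
# timestamps are 25-char ISO 8601 strings.
assert df["sha"].str.len().eq(40).all()
assert df["created_at"].str.len().eq(25).all()
```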
87989a49e98945c2be5bc022392bc40a97d6311b
# Dataset Card for `mr-tydi/ja/train`

The `mr-tydi/ja/train` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ja/train).

# Data

This dataset provides:

- `queries` (i.e., topics); count=3,697
- `qrels` (relevance assessments); count=3,697
- For `docs`, use [`irds/mr-tydi_ja`](https://huggingface.co/datasets/irds/mr-tydi_ja)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ja_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ja_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ja_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ja", "region:us" ]
2023-01-05T03:35:55+00:00
{"source_datasets": ["irds/mr-tydi_ja"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ja/train`", "viewer": false}
2023-01-05T03:36:01+00:00
19d49b07a7ca272e41bdf4a611d03afb80da86a4
# Dataset Card for `mr-tydi/ko`

The `mr-tydi/ko` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko).

# Data

This dataset provides:

- `docs` (documents, i.e., the corpus); count=1,496,126
- `queries` (i.e., topics); count=2,019
- `qrels` (relevance assessments); count=2,116

This dataset is used by: [`mr-tydi_ko_dev`](https://huggingface.co/datasets/irds/mr-tydi_ko_dev), [`mr-tydi_ko_test`](https://huggingface.co/datasets/irds/mr-tydi_ko_test), [`mr-tydi_ko_train`](https://huggingface.co/datasets/irds/mr-tydi_ko_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/mr-tydi_ko', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/mr-tydi_ko', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ko', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ko
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:36:06+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ko`", "viewer": false}
2023-01-05T03:36:12+00:00
50fbf3dc790d69f8223a40f7b507d1c9c83560d1
# Dataset Card for `mr-tydi/ko/dev`

The `mr-tydi/ko/dev` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko/dev).

# Data

This dataset provides:

- `queries` (i.e., topics); count=303
- `qrels` (relevance assessments); count=307
- For `docs`, use [`irds/mr-tydi_ko`](https://huggingface.co/datasets/irds/mr-tydi_ko)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ko_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ko_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ko_dev
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ko", "region:us" ]
2023-01-05T03:36:17+00:00
{"source_datasets": ["irds/mr-tydi_ko"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ko/dev`", "viewer": false}
2023-01-05T03:36:23+00:00
b8d4a4c1b9d152a933a3a52117d2ebadcb944675
nc33/yesno_qna
[ "license:mit", "region:us" ]
2023-01-05T03:36:23+00:00
{"license": "mit"}
2023-01-07T03:56:21+00:00
e4ad9adfe178042d250dbd9e7a41933aad34c331
# Dataset Card for `mr-tydi/ko/test`

The `mr-tydi/ko/test` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko/test).

# Data

This dataset provides:

- `queries` (i.e., topics); count=421
- `qrels` (relevance assessments); count=492
- For `docs`, use [`irds/mr-tydi_ko`](https://huggingface.co/datasets/irds/mr-tydi_ko)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ko_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ko_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ko_test
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ko", "region:us" ]
2023-01-05T03:36:28+00:00
{"source_datasets": ["irds/mr-tydi_ko"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ko/test`", "viewer": false}
2023-01-05T03:36:34+00:00
b003a7ba3223a88f771eb72917da601b429f7793
# Dataset Card for `mr-tydi/ko/train`

The `mr-tydi/ko/train` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko/train).

# Data

This dataset provides:

- `queries` (i.e., topics); count=1,295
- `qrels` (relevance assessments); count=1,317
- For `docs`, use [`irds/mr-tydi_ko`](https://huggingface.co/datasets/irds/mr-tydi_ko)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ko_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ko_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ko_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ko", "region:us" ]
2023-01-05T03:36:40+00:00
{"source_datasets": ["irds/mr-tydi_ko"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ko/train`", "viewer": false}
2023-01-05T03:36:45+00:00
921f7bc8537c94f8922de61012d2c2a1b6ef0210
# Dataset Card for `mr-tydi/ru`

The `mr-tydi/ru` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ru).

# Data

This dataset provides:

- `docs` (documents, i.e., the corpus); count=9,597,504
- `queries` (i.e., topics); count=7,763
- `qrels` (relevance assessments); count=7,909

This dataset is used by: [`mr-tydi_ru_dev`](https://huggingface.co/datasets/irds/mr-tydi_ru_dev), [`mr-tydi_ru_test`](https://huggingface.co/datasets/irds/mr-tydi_ru_test), [`mr-tydi_ru_train`](https://huggingface.co/datasets/irds/mr-tydi_ru_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/mr-tydi_ru', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/mr-tydi_ru', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ru', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:36:51+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ru`", "viewer": false}
2023-01-05T03:36:56+00:00
a8ee1a892b77782a131f506ff02d1fc77e8729e4
# Dataset Card for `mr-tydi/ru/dev`

The `mr-tydi/ru/dev` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ru/dev).

# Data

This dataset provides:

- `queries` (i.e., topics); count=1,375
- `qrels` (relevance assessments); count=1,375
- For `docs`, use [`irds/mr-tydi_ru`](https://huggingface.co/datasets/irds/mr-tydi_ru)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ru_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ru_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ru_dev
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ru", "region:us" ]
2023-01-05T03:37:02+00:00
{"source_datasets": ["irds/mr-tydi_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ru/dev`", "viewer": false}
2023-01-05T03:37:08+00:00
dc72ad423ea653df9a152901e94b70459b867200
# Dataset Card for `mr-tydi/ru/test`

The `mr-tydi/ru/test` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ru/test).

# Data

This dataset provides:

- `queries` (i.e., topics); count=995
- `qrels` (relevance assessments); count=1,168
- For `docs`, use [`irds/mr-tydi_ru`](https://huggingface.co/datasets/irds/mr-tydi_ru)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ru_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ru_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ru_test
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ru", "region:us" ]
2023-01-05T03:37:13+00:00
{"source_datasets": ["irds/mr-tydi_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ru/test`", "viewer": false}
2023-01-05T03:37:19+00:00
fba320652c0956c9cc9ae9c329a595f4edf5907e
# Dataset Card for `mr-tydi/ru/train`

The `mr-tydi/ru/train` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ru/train).

# Data

This dataset provides:

- `queries` (i.e., topics); count=5,366
- `qrels` (relevance assessments); count=5,366
- For `docs`, use [`irds/mr-tydi_ru`](https://huggingface.co/datasets/irds/mr-tydi_ru)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_ru_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_ru_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_ru_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_ru", "region:us" ]
2023-01-05T03:37:24+00:00
{"source_datasets": ["irds/mr-tydi_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/ru/train`", "viewer": false}
2023-01-05T03:37:30+00:00
e2b0c8a8bd1c0ab385155db752a5a4b617c0acd3
# Dataset Card for `mr-tydi/sw`

The `mr-tydi/sw` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/sw).

# Data

This dataset provides:

- `docs` (documents, i.e., the corpus); count=136,689
- `queries` (i.e., topics); count=3,271
- `qrels` (relevance assessments); count=3,767

This dataset is used by: [`mr-tydi_sw_dev`](https://huggingface.co/datasets/irds/mr-tydi_sw_dev), [`mr-tydi_sw_test`](https://huggingface.co/datasets/irds/mr-tydi_sw_test), [`mr-tydi_sw_train`](https://huggingface.co/datasets/irds/mr-tydi_sw_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/mr-tydi_sw', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/mr-tydi_sw', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_sw', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_sw
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:37:35+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/sw`", "viewer": false}
2023-01-05T03:37:41+00:00
7d5afad39aae315f9b7791f43125af82bd71d691
# Dataset Card for `mr-tydi/sw/dev`

The `mr-tydi/sw/dev` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/sw/dev).

# Data

This dataset provides:

- `queries` (i.e., topics); count=526
- `qrels` (relevance assessments); count=623
- For `docs`, use [`irds/mr-tydi_sw`](https://huggingface.co/datasets/irds/mr-tydi_sw)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_sw_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_sw_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_sw_dev
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_sw", "region:us" ]
2023-01-05T03:37:46+00:00
{"source_datasets": ["irds/mr-tydi_sw"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/sw/dev`", "viewer": false}
2023-01-05T03:37:52+00:00
516c5036fd3342954acbb1d68a7d4d01372a5bbf
# Dataset Card for `mr-tydi/sw/test`

The `mr-tydi/sw/test` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/sw/test).

# Data

This dataset provides:

- `queries` (i.e., topics); count=670
- `qrels` (relevance assessments); count=743
- For `docs`, use [`irds/mr-tydi_sw`](https://huggingface.co/datasets/irds/mr-tydi_sw)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_sw_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_sw_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_sw_test
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_sw", "region:us" ]
2023-01-05T03:37:57+00:00
{"source_datasets": ["irds/mr-tydi_sw"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/sw/test`", "viewer": false}
2023-01-05T03:38:03+00:00
46fff70c898d5f126547e7441e476b0e85261fca
# Dataset Card for `mr-tydi/sw/train`

The `mr-tydi/sw/train` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/sw/train).

# Data

This dataset provides:

- `queries` (i.e., topics); count=2,072
- `qrels` (relevance assessments); count=2,401
- For `docs`, use [`irds/mr-tydi_sw`](https://huggingface.co/datasets/irds/mr-tydi_sw)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_sw_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_sw_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_sw_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_sw", "region:us" ]
2023-01-05T03:38:09+00:00
{"source_datasets": ["irds/mr-tydi_sw"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/sw/train`", "viewer": false}
2023-01-05T03:38:14+00:00
639e860df1f9b34602230f4318d0fbf898bd960c
# Dataset Card for `mr-tydi/te`

The `mr-tydi/te` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te).

# Data

This dataset provides:

- `docs` (documents, i.e., the corpus); count=548,224
- `queries` (i.e., topics); count=5,517
- `qrels` (relevance assessments); count=5,540

This dataset is used by: [`mr-tydi_te_dev`](https://huggingface.co/datasets/irds/mr-tydi_te_dev), [`mr-tydi_te_test`](https://huggingface.co/datasets/irds/mr-tydi_te_test), [`mr-tydi_te_train`](https://huggingface.co/datasets/irds/mr-tydi_te_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/mr-tydi_te', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/mr-tydi_te', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_te', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_te
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:38:20+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/te`", "viewer": false}
2023-01-05T03:38:25+00:00
968e3f3407be0e6883971ff27778f4178c3fd370
# Dataset Card for `mr-tydi/te/dev`

The `mr-tydi/te/dev` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te/dev).

# Data

This dataset provides:

- `queries` (i.e., topics); count=983
- `qrels` (relevance assessments); count=983
- For `docs`, use [`irds/mr-tydi_te`](https://huggingface.co/datasets/irds/mr-tydi_te)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_te_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_te_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_te_dev
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_te", "region:us" ]
2023-01-05T03:38:31+00:00
{"source_datasets": ["irds/mr-tydi_te"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/te/dev`", "viewer": false}
2023-01-05T03:38:36+00:00
43fba1a998336cc7d576b25d65e6b89e4c482ba5
# Dataset Card for `mr-tydi/te/test`

The `mr-tydi/te/test` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te/test).

# Data

This dataset provides:

- `queries` (i.e., topics); count=646
- `qrels` (relevance assessments); count=677
- For `docs`, use [`irds/mr-tydi_te`](https://huggingface.co/datasets/irds/mr-tydi_te)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_te_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_te_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_te_test
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_te", "region:us" ]
2023-01-05T03:38:42+00:00
{"source_datasets": ["irds/mr-tydi_te"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/te/test`", "viewer": false}
2023-01-05T03:38:48+00:00
9c297c01a47429ac07ce6a478f064e4311109e0a
# Dataset Card for `mr-tydi/te/train`

The `mr-tydi/te/train` dataset is provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te/train).

# Data

This dataset provides:

- `queries` (i.e., topics); count=3,880
- `qrels` (relevance assessments); count=3,880
- For `docs`, use [`irds/mr-tydi_te`](https://huggingface.co/datasets/irds/mr-tydi_te)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/mr-tydi_te_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/mr-tydi_te_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Zhang2021MrTyDi,
  title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
  author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
  year={2021},
  journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
  title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year={2020},
  journal={Transactions of the Association for Computational Linguistics}
}
```
irds/mr-tydi_te_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_te", "region:us" ]
2023-01-05T03:38:53+00:00
{"source_datasets": ["irds/mr-tydi_te"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/te/train`", "viewer": false}
2023-01-05T03:38:59+00:00
95331d87066c62a0c61a298d0e3d66c113bd3967
# Dataset Card for `mr-tydi/th` The `mr-tydi/th` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/th). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=568,855 - `queries` (i.e., topics); count=5,322 - `qrels`: (relevance assessments); count=5,545 This dataset is used by: [`mr-tydi_th_dev`](https://huggingface.co/datasets/irds/mr-tydi_th_dev), [`mr-tydi_th_test`](https://huggingface.co/datasets/irds/mr-tydi_th_test), [`mr-tydi_th_train`](https://huggingface.co/datasets/irds/mr-tydi_th_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/mr-tydi_th', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/mr-tydi_th', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mr-tydi_th', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Zhang2021MrTyDi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } @article{Clark2020TyDiQa, title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year={2020}, journal={Transactions of the Association for Computational Linguistics} } ```
irds/mr-tydi_th
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:39:04+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/th`", "viewer": false}
2023-01-05T03:39:10+00:00
3b5294ae5f504da36b7db4a6c5aef560ef131d0e
# Dataset Card for `mr-tydi/th/dev` The `mr-tydi/th/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/th/dev). # Data This dataset provides: - `queries` (i.e., topics); count=807 - `qrels`: (relevance assessments); count=817 - For `docs`, use [`irds/mr-tydi_th`](https://huggingface.co/datasets/irds/mr-tydi_th) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mr-tydi_th_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mr-tydi_th_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Zhang2021MrTyDi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } @article{Clark2020TyDiQa, title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year={2020}, journal={Transactions of the Association for Computational Linguistics} } ```
irds/mr-tydi_th_dev
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_th", "region:us" ]
2023-01-05T03:39:15+00:00
{"source_datasets": ["irds/mr-tydi_th"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/th/dev`", "viewer": false}
2023-01-05T03:39:21+00:00
188293dbdf922e1eef8f4f9c16e72dc9c93f603f
# Dataset Card for `mr-tydi/th/test` The `mr-tydi/th/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/th/test). # Data This dataset provides: - `queries` (i.e., topics); count=1,190 - `qrels`: (relevance assessments); count=1,368 - For `docs`, use [`irds/mr-tydi_th`](https://huggingface.co/datasets/irds/mr-tydi_th) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mr-tydi_th_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mr-tydi_th_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Zhang2021MrTyDi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } @article{Clark2020TyDiQa, title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year={2020}, journal={Transactions of the Association for Computational Linguistics} } ```
irds/mr-tydi_th_test
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_th", "region:us" ]
2023-01-05T03:39:26+00:00
{"source_datasets": ["irds/mr-tydi_th"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/th/test`", "viewer": false}
2023-01-05T03:39:32+00:00
7a1c0d528c5af261b3bf83d78d17162197b7f784
# Dataset Card for `mr-tydi/th/train` The `mr-tydi/th/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/th/train). # Data This dataset provides: - `queries` (i.e., topics); count=3,319 - `qrels`: (relevance assessments); count=3,360 - For `docs`, use [`irds/mr-tydi_th`](https://huggingface.co/datasets/irds/mr-tydi_th) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mr-tydi_th_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mr-tydi_th_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Zhang2021MrTyDi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } @article{Clark2020TyDiQa, title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year={2020}, journal={Transactions of the Association for Computational Linguistics} } ```
irds/mr-tydi_th_train
[ "task_categories:text-retrieval", "source_datasets:irds/mr-tydi_th", "region:us" ]
2023-01-05T03:39:37+00:00
{"source_datasets": ["irds/mr-tydi_th"], "task_categories": ["text-retrieval"], "pretty_name": "`mr-tydi/th/train`", "viewer": false}
2023-01-05T03:39:43+00:00
956bce82788ff308635d06327bb0bd48cd56a3be
# Dataset Card for `msmarco-document` The `msmarco-document` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,213,835 This dataset is used by: [`msmarco-document_trec-dl-hard`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard), [`msmarco-document_trec-dl-hard_fold1`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold1), [`msmarco-document_trec-dl-hard_fold2`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold2), [`msmarco-document_trec-dl-hard_fold3`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold3), [`msmarco-document_trec-dl-hard_fold4`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold4), [`msmarco-document_trec-dl-hard_fold5`](https://huggingface.co/datasets/irds/msmarco-document_trec-dl-hard_fold5) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/msmarco-document', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'title': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:39:49+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document`", "viewer": false}
2023-01-05T03:39:55+00:00
35ededfdecf541b9a9249f636035c6b302387e0c
# Dataset Card for `msmarco-document/trec-dl-hard` The `msmarco-document/trec-dl-hard` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=8,544 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:00+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard`", "viewer": false}
2023-01-05T03:40:06+00:00
13ab0f3901a4a6e210e786a2edb7a5af8be22233
# Dataset Card for `msmarco-document/trec-dl-hard/fold1` The `msmarco-document/trec-dl-hard/fold1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold1). # Data This dataset provides: - `queries` (i.e., topics); count=10 - `qrels`: (relevance assessments); count=1,557 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold1', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard_fold1
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:11+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard/fold1`", "viewer": false}
2023-01-05T03:40:17+00:00
b7a4726f6a41d14709243cf99043230d44a86480
# Dataset Card for `msmarco-document/trec-dl-hard/fold2` The `msmarco-document/trec-dl-hard/fold2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold2). # Data This dataset provides: - `queries` (i.e., topics); count=10 - `qrels`: (relevance assessments); count=1,345 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold2', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard_fold2
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:22+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard/fold2`", "viewer": false}
2023-01-05T03:40:28+00:00
488b004ede7268c778980a695ff5b731ddb14e68
# Dataset Card for `msmarco-document/trec-dl-hard/fold3` The `msmarco-document/trec-dl-hard/fold3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold3). # Data This dataset provides: - `queries` (i.e., topics); count=10 - `qrels`: (relevance assessments); count=474 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold3', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard_fold3
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:33+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard/fold3`", "viewer": false}
2023-01-05T03:40:39+00:00
d4729675ebbbfd4ee7f2d129d2283a28726d2eda
# Dataset Card for `msmarco-document/trec-dl-hard/fold4` The `msmarco-document/trec-dl-hard/fold4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold4). # Data This dataset provides: - `queries` (i.e., topics); count=10 - `qrels`: (relevance assessments); count=1,054 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold4', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold4', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard_fold4
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:45+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard/fold4`", "viewer": false}
2023-01-05T03:40:50+00:00
8595e58f99370737775502e627af1720e70fbdae
# Dataset Card for `msmarco-document/trec-dl-hard/fold5` The `msmarco-document/trec-dl-hard/fold5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document#msmarco-document/trec-dl-hard/fold5). # Data This dataset provides: - `queries` (i.e., topics); count=10 - `qrels`: (relevance assessments); count=4,114 - For `docs`, use [`irds/msmarco-document`](https://huggingface.co/datasets/irds/msmarco-document) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document_trec-dl-hard_fold5', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document_trec-dl-hard_fold5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @article{Mackie2021DlHard, title={How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset}, author={Iain Mackie and Jeffrey Dalton and Andrew Yates}, journal={ArXiv}, year={2021}, volume={abs/2105.07975} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document_trec-dl-hard_fold5
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document", "region:us" ]
2023-01-05T03:40:56+00:00
{"source_datasets": ["irds/msmarco-document"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document/trec-dl-hard/fold5`", "viewer": false}
2023-01-05T03:41:01+00:00
1cd1777f44d3df9a74fb6080240bec820de97db7
# Dataset Card for `msmarco-document-v2` The `msmarco-document-v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=11,959,635 This dataset is used by: [`msmarco-document-v2_trec-dl-2019`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019), [`msmarco-document-v2_trec-dl-2019_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019_judged), [`msmarco-document-v2_trec-dl-2020`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020), [`msmarco-document-v2_trec-dl-2020_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/msmarco-document-v2', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'title': ..., 'headings': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document-v2
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:41:07+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document-v2`", "viewer": false}
2023-01-05T03:41:13+00:00
4d85781c48d6e38047f4be4d3c00fc392fd7411f
# Dataset Card for `msmarco-document-v2/trec-dl-2019` The `msmarco-document-v2/trec-dl-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2019). # Data This dataset provides: - `queries` (i.e., topics); count=200 - `qrels`: (relevance assessments); count=13,940 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2) This dataset is used by: [`msmarco-document-v2_trec-dl-2019_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document-v2_trec-dl-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Craswell2019TrecDl, title={Overview of the TREC 2019 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees}, booktitle={TREC 2019}, year={2019} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document-v2_trec-dl-2019
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document-v2", "region:us" ]
2023-01-05T03:41:18+00:00
{"source_datasets": ["irds/msmarco-document-v2"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document-v2/trec-dl-2019`", "viewer": false}
2023-01-05T03:41:24+00:00
6f3a39d20cbb52b1e7496e201cfc900c668d6ff9
# Dataset Card for `msmarco-document-v2/trec-dl-2019/judged` The `msmarco-document-v2/trec-dl-2019/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2019/judged). # Data This dataset provides: - `queries` (i.e., topics); count=43 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2) - For `qrels`, use [`irds/msmarco-document-v2_trec-dl-2019`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2019) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document-v2_trec-dl-2019_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Craswell2019TrecDl, title={Overview of the TREC 2019 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees}, booktitle={TREC 2019}, year={2019} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document-v2_trec-dl-2019_judged
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document-v2", "source_datasets:irds/msmarco-document-v2_trec-dl-2019", "region:us" ]
2023-01-05T03:41:29+00:00
{"source_datasets": ["irds/msmarco-document-v2", "irds/msmarco-document-v2_trec-dl-2019"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document-v2/trec-dl-2019/judged`", "viewer": false}
2023-01-05T03:41:35+00:00
22e10c61d01c6aa08d8c1c4113955ec11a7c23d7
# Dataset Card for `msmarco-document-v2/trec-dl-2020` The `msmarco-document-v2/trec-dl-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2020). # Data This dataset provides: - `queries` (i.e., topics); count=200 - `qrels`: (relevance assessments); count=7,942 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2) This dataset is used by: [`msmarco-document-v2_trec-dl-2020_judged`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document-v2_trec-dl-2020', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/msmarco-document-v2_trec-dl-2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Craswell2020TrecDl, title={Overview of the TREC 2020 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos}, booktitle={TREC}, year={2020} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document-v2_trec-dl-2020
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document-v2", "region:us" ]
2023-01-05T03:41:40+00:00
{"source_datasets": ["irds/msmarco-document-v2"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document-v2/trec-dl-2020`", "viewer": false}
2023-01-05T03:41:46+00:00
f9988dba951cb081b53e58c9bd6cc99923ad8be3
# Dataset Card for `msmarco-document-v2/trec-dl-2020/judged` The `msmarco-document-v2/trec-dl-2020/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-document-v2#msmarco-document-v2/trec-dl-2020/judged). # Data This dataset provides: - `queries` (i.e., topics); count=45 - For `docs`, use [`irds/msmarco-document-v2`](https://huggingface.co/datasets/irds/msmarco-document-v2) - For `qrels`, use [`irds/msmarco-document-v2_trec-dl-2020`](https://huggingface.co/datasets/irds/msmarco-document-v2_trec-dl-2020) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/msmarco-document-v2_trec-dl-2020_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Craswell2020TrecDl, title={Overview of the TREC 2020 deep learning track}, author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos}, booktitle={TREC}, year={2020} } @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-document-v2_trec-dl-2020_judged
[ "task_categories:text-retrieval", "source_datasets:irds/msmarco-document-v2", "source_datasets:irds/msmarco-document-v2_trec-dl-2020", "region:us" ]
2023-01-05T03:41:51+00:00
{"source_datasets": ["irds/msmarco-document-v2", "irds/msmarco-document-v2_trec-dl-2020"], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-document-v2/trec-dl-2020/judged`", "viewer": false}
2023-01-05T03:41:57+00:00
cf23d065f02c5f0bb0e0aba80ecd9cb97a65aef8
# Dataset Card for `msmarco-qna` The `msmarco-qna` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/msmarco-qna#msmarco-qna). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=9,048,606 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/msmarco-qna', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'url': ..., 'msmarco_passage_id': ..., 'msmarco_document_id': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bajaj2016Msmarco, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang}, booktitle={InCoCo@NIPS}, year={2016} } ```
irds/msmarco-qna
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:42:02+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`msmarco-qna`", "viewer": false}
2023-01-05T03:42:08+00:00
4832073238f72be3b72607cc7c9d0478cc89d233
# Dataset Card for `neumarco/fa` The `neumarco/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`neumarco_fa_dev`](https://huggingface.co/datasets/irds/neumarco_fa_dev), [`neumarco_fa_dev_judged`](https://huggingface.co/datasets/irds/neumarco_fa_dev_judged), [`neumarco_fa_dev_small`](https://huggingface.co/datasets/irds/neumarco_fa_dev_small), [`neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train), [`neumarco_fa_train_judged`](https://huggingface.co/datasets/irds/neumarco_fa_train_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neumarco_fa', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:42:14+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa`", "viewer": false}
2023-01-05T03:42:19+00:00
73180c1d1348eed46d5fc81bda6ad2ce4f38066f
# Dataset Card for `neumarco/fa/dev` The `neumarco/fa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa) This dataset is used by: [`neumarco_fa_dev_judged`](https://huggingface.co/datasets/irds/neumarco_fa_dev_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_fa_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_fa_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:25+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev`", "viewer": false}
2023-01-05T03:42:30+00:00
197de71e9eeb3a74cefa1b0e1ac8813d6139652f
# Dataset Card for `neumarco/fa/dev/judged` The `neumarco/fa/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev/judged). # Data This dataset provides: - `queries` (i.e., topics); count=55,578 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa) - For `qrels`, use [`irds/neumarco_fa_dev`](https://huggingface.co/datasets/irds/neumarco_fa_dev) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_fa_dev_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "source_datasets:irds/neumarco_fa_dev", "region:us" ]
2023-01-05T03:42:36+00:00
{"source_datasets": ["irds/neumarco_fa", "irds/neumarco_fa_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev/judged`", "viewer": false}
2023-01-05T03:42:41+00:00
4f7e9b08f302936c8f42dff46f4bb3df4c728247
# Dataset Card for `neumarco/fa/dev/small` The `neumarco/fa/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/dev/small). # Data This dataset provides: - `queries` (i.e., topics); count=6,980 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_fa_dev_small', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_fa_dev_small', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:47+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/dev/small`", "viewer": false}
2023-01-05T03:42:53+00:00
f991550fab0acc9540353de90ae6d46d0950821b
# Dataset Card for `neumarco/fa/train` The `neumarco/fa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/train). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - `qrels`: (relevance assessments); count=532,761 - `docpairs`; count=269,919,004 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa) This dataset is used by: [`neumarco_fa_train_judged`](https://huggingface.co/datasets/irds/neumarco_fa_train_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_fa_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_fa_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/neumarco_fa_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "region:us" ]
2023-01-05T03:42:58+00:00
{"source_datasets": ["irds/neumarco_fa"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/train`", "viewer": false}
2023-01-05T03:43:04+00:00
772eca36006c63ecc66bf08ef2a88b57fe1df400
# Dataset Card for `neumarco/fa/train/judged` The `neumarco/fa/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/fa/train/judged). # Data This dataset provides: - `queries` (i.e., topics); count=502,939 - For `docs`, use [`irds/neumarco_fa`](https://huggingface.co/datasets/irds/neumarco_fa) - For `qrels`, use [`irds/neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train) - For `docpairs`, use [`irds/neumarco_fa_train`](https://huggingface.co/datasets/irds/neumarco_fa_train) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_fa_train_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_fa_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_fa", "source_datasets:irds/neumarco_fa_train", "region:us" ]
2023-01-05T03:43:09+00:00
{"source_datasets": ["irds/neumarco_fa", "irds/neumarco_fa_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/fa/train/judged`", "viewer": false}
2023-01-05T03:43:15+00:00
dc02a3d5a47b72f1df1dde2ee6ecd3a70b2fe4c1
# Dataset Card for `neumarco/ru` The `neumarco/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`neumarco_ru_dev`](https://huggingface.co/datasets/irds/neumarco_ru_dev), [`neumarco_ru_dev_judged`](https://huggingface.co/datasets/irds/neumarco_ru_dev_judged), [`neumarco_ru_dev_small`](https://huggingface.co/datasets/irds/neumarco_ru_dev_small), [`neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train), [`neumarco_ru_train_judged`](https://huggingface.co/datasets/irds/neumarco_ru_train_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neumarco_ru', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:43:20+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru`", "viewer": false}
2023-01-05T03:43:26+00:00
247672bb5e728f4abbb4dc77ab13081629186363
# Dataset Card for `neumarco/ru/dev` The `neumarco/ru/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru) This dataset is used by: [`neumarco_ru_dev_judged`](https://huggingface.co/datasets/irds/neumarco_ru_dev_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_ru_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_ru_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:43:31+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev`", "viewer": false}
2023-01-05T03:43:37+00:00
98bd8ef8f7158f20241c0ad08c018cc608e78048
# Dataset Card for `neumarco/ru/dev/judged` The `neumarco/ru/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev/judged). # Data This dataset provides: - `queries` (i.e., topics); count=55,578 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru) - For `qrels`, use [`irds/neumarco_ru_dev`](https://huggingface.co/datasets/irds/neumarco_ru_dev) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_ru_dev_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "source_datasets:irds/neumarco_ru_dev", "region:us" ]
2023-01-05T03:43:43+00:00
{"source_datasets": ["irds/neumarco_ru", "irds/neumarco_ru_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev/judged`", "viewer": false}
2023-01-05T03:43:49+00:00
7119b3bce58621e4de0b6d49af778d93ba59cf37
# Dataset Card for `neumarco/ru/dev/small` The `neumarco/ru/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/dev/small). # Data This dataset provides: - `queries` (i.e., topics); count=6,980 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_ru_dev_small', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_ru_dev_small', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:43:54+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/dev/small`", "viewer": false}
2023-01-05T03:44:00+00:00
8fa65507b46ded9f16109fd04a88540bd89b5da7
# Dataset Card for `neumarco/ru/train` The `neumarco/ru/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/train). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - `qrels`: (relevance assessments); count=532,761 - `docpairs`; count=269,919,004 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru) This dataset is used by: [`neumarco_ru_train_judged`](https://huggingface.co/datasets/irds/neumarco_ru_train_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_ru_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_ru_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/neumarco_ru_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "region:us" ]
2023-01-05T03:44:05+00:00
{"source_datasets": ["irds/neumarco_ru"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/train`", "viewer": false}
2023-01-05T03:44:11+00:00
b10efa0c750292faae7121eb7f1185cbdd1b1ef5
# Dataset Card for `neumarco/ru/train/judged` The `neumarco/ru/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/train/judged). # Data This dataset provides: - `queries` (i.e., topics); count=502,939 - For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru) - For `qrels`, use [`irds/neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train) - For `docpairs`, use [`irds/neumarco_ru_train`](https://huggingface.co/datasets/irds/neumarco_ru_train) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_ru_train_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_ru_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_ru", "source_datasets:irds/neumarco_ru_train", "region:us" ]
2023-01-05T03:44:16+00:00
{"source_datasets": ["irds/neumarco_ru", "irds/neumarco_ru_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/ru/train/judged`", "viewer": false}
2023-01-05T03:44:22+00:00
38fa331af462b210a32e3213a34e232cc27e510b
# Dataset Card for `neumarco/zh` The `neumarco/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,841,823 This dataset is used by: [`neumarco_zh_dev`](https://huggingface.co/datasets/irds/neumarco_zh_dev), [`neumarco_zh_dev_judged`](https://huggingface.co/datasets/irds/neumarco_zh_dev_judged), [`neumarco_zh_dev_small`](https://huggingface.co/datasets/irds/neumarco_zh_dev_small), [`neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train), [`neumarco_zh_train_judged`](https://huggingface.co/datasets/irds/neumarco_zh_train_judged) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/neumarco_zh', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:44:27+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh`", "viewer": false}
2023-01-05T03:44:33+00:00
142218f829a44fa1a5e0f4d3deef06edfa48b961
# Dataset Card for `neumarco/zh/dev` The `neumarco/zh/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev). # Data This dataset provides: - `queries` (i.e., topics); count=101,093 - `qrels`: (relevance assessments); count=59,273 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) This dataset is used by: [`neumarco_zh_dev_judged`](https://huggingface.co/datasets/irds/neumarco_zh_dev_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:44:38+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev`", "viewer": false}
2023-01-05T03:44:44+00:00
bb88f4f0815d99e2fe6909c1a86b7f1792a1bf82
# Dataset Card for `neumarco/zh/dev/judged` The `neumarco/zh/dev/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev/judged). # Data This dataset provides: - `queries` (i.e., topics); count=55,578 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) - For `qrels`, use [`irds/neumarco_zh_dev`](https://huggingface.co/datasets/irds/neumarco_zh_dev) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "source_datasets:irds/neumarco_zh_dev", "region:us" ]
2023-01-05T03:44:50+00:00
{"source_datasets": ["irds/neumarco_zh", "irds/neumarco_zh_dev"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev/judged`", "viewer": false}
2023-01-05T03:44:55+00:00
a9dfebce7a38ebcbf75dcdd9c5754dbf4ef97433
# Dataset Card for `neumarco/zh/dev/small` The `neumarco/zh/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/dev/small). # Data This dataset provides: - `queries` (i.e., topics); count=6,980 - `qrels`: (relevance assessments); count=7,437 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_dev_small', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_dev_small', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_dev_small
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:45:01+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/dev/small`", "viewer": false}
2023-01-05T03:45:06+00:00
4e1a0fc268895279a59021cf645af5dce40a9fa5
# Dataset Card for `neumarco/zh/train` The `neumarco/zh/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/train). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - `qrels`: (relevance assessments); count=532,761 - `docpairs`; count=269,919,004 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) This dataset is used by: [`neumarco_zh_train_judged`](https://huggingface.co/datasets/irds/neumarco_zh_train_judged) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/neumarco_zh_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/neumarco_zh_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_train
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "region:us" ]
2023-01-05T03:45:12+00:00
{"source_datasets": ["irds/neumarco_zh"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/train`", "viewer": false}
2023-01-05T03:45:18+00:00
2b81e8013a61713e4698b6b189e4c418a66448a8
# Dataset Card for `neumarco/zh/train/judged` The `neumarco/zh/train/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/zh/train/judged). # Data This dataset provides: - `queries` (i.e., topics); count=502,939 - For `docs`, use [`irds/neumarco_zh`](https://huggingface.co/datasets/irds/neumarco_zh) - For `qrels`, use [`irds/neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train) - For `docpairs`, use [`irds/neumarco_zh_train`](https://huggingface.co/datasets/irds/neumarco_zh_train) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/neumarco_zh_train_judged', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.
irds/neumarco_zh_train_judged
[ "task_categories:text-retrieval", "source_datasets:irds/neumarco_zh", "source_datasets:irds/neumarco_zh_train", "region:us" ]
2023-01-05T03:45:23+00:00
{"source_datasets": ["irds/neumarco_zh", "irds/neumarco_zh_train"], "task_categories": ["text-retrieval"], "pretty_name": "`neumarco/zh/train/judged`", "viewer": false}
2023-01-05T03:45:29+00:00
a93f3f651bffbfc7dccee464c5221d69bb533b9c
# Dataset Card for `nfcorpus` The `nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,371 This dataset is used by: [`nfcorpus_dev`](https://huggingface.co/datasets/irds/nfcorpus_dev), [`nfcorpus_dev_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_dev_nontopic), [`nfcorpus_dev_video`](https://huggingface.co/datasets/irds/nfcorpus_dev_video), [`nfcorpus_test`](https://huggingface.co/datasets/irds/nfcorpus_test), [`nfcorpus_test_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_test_nontopic), [`nfcorpus_test_video`](https://huggingface.co/datasets/irds/nfcorpus_test_video), [`nfcorpus_train`](https://huggingface.co/datasets/irds/nfcorpus_train), [`nfcorpus_train_nontopic`](https://huggingface.co/datasets/irds/nfcorpus_train_nontopic), [`nfcorpus_train_video`](https://huggingface.co/datasets/irds/nfcorpus_train_video) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/nfcorpus', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:45:34+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus`", "viewer": false}
2023-01-05T03:45:40+00:00
e0998dd546ea60e52b86ccb3618ebfd665117ea4
# Dataset Card for `nfcorpus/dev` The `nfcorpus/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev). # Data This dataset provides: - `queries` (i.e., topics); count=325 - `qrels`: (relevance assessments); count=14,589 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:45:45+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev`", "viewer": false}
2023-01-05T03:45:51+00:00
611bd08ccae3987930e7f129b3956da865e56a26
# Dataset Card for `nfcorpus/dev/nontopic` The `nfcorpus/dev/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=144 - `qrels`: (relevance assessments); count=4,353 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_dev_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:45:57+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev/nontopic`", "viewer": false}
2023-01-05T03:46:02+00:00
3636266168c7341424759c3239724d46bc5b9981
# Dataset Card for `nfcorpus/dev/video` The `nfcorpus/dev/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/dev/video). # Data This dataset provides: - `queries` (i.e., topics); count=102 - `qrels`: (relevance assessments); count=3,068 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_dev_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_dev_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_dev_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:08+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/dev/video`", "viewer": false}
2023-01-05T03:46:13+00:00
b2f4a28248d54a5d9d71413246e0be85ef02ebd1
# Dataset Card for `nfcorpus/test` The `nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test). # Data This dataset provides: - `queries` (i.e., topics); count=325 - `qrels`: (relevance assessments); count=15,820 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:19+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test`", "viewer": false}
2023-01-05T03:46:24+00:00
2ddd47a2a3b68cee5d44330302fa7ef51aae9b73
# Dataset Card for `nfcorpus/test/nontopic` The `nfcorpus/test/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=144 - `qrels`: (relevance assessments); count=4,540 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_test_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:30+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test/nontopic`", "viewer": false}
2023-01-05T03:46:36+00:00
6800cec2cc8cf14f941b3a02db775ddc3c93fddd
# Dataset Card for `nfcorpus/test/video` The `nfcorpus/test/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/test/video). # Data This dataset provides: - `queries` (i.e., topics); count=102 - `qrels`: (relevance assessments); count=3,108 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_test_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_test_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_test_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:41+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/test/video`", "viewer": false}
2023-01-05T03:46:47+00:00
0fb021728466330179089ff30a28cb8c8a94495f
# Dataset Card for `nfcorpus/train` The `nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train). # Data This dataset provides: - `queries` (i.e., topics); count=2,594 - `qrels`: (relevance assessments); count=139,350 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'all': ...} qrels = load_dataset('irds/nfcorpus_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:46:52+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train`", "viewer": false}
2023-01-05T03:46:58+00:00
83ff1cb44780c1a3b5cbad8963fea0865b49aa83
# Dataset Card for `nfcorpus/train/nontopic` The `nfcorpus/train/nontopic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train/nontopic). # Data This dataset provides: - `queries` (i.e., topics); count=1,141 - `qrels`: (relevance assessments); count=37,383 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train_nontopic', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nfcorpus_train_nontopic', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train_nontopic
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:47:03+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train/nontopic`", "viewer": false}
2023-01-05T03:47:09+00:00
26d34488578a5f7c180cd6948b0ae34aefdc4f2e
# Dataset Card for `nfcorpus/train/video` The `nfcorpus/train/video` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus/train/video). # Data This dataset provides: - `queries` (i.e., topics); count=812 - `qrels`: (relevance assessments); count=27,465 - For `docs`, use [`irds/nfcorpus`](https://huggingface.co/datasets/irds/nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nfcorpus_train_video', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'desc': ...} qrels = load_dataset('irds/nfcorpus_train_video', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } ```
irds/nfcorpus_train_video
[ "task_categories:text-retrieval", "source_datasets:irds/nfcorpus", "region:us" ]
2023-01-05T03:47:15+00:00
{"source_datasets": ["irds/nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`nfcorpus/train/video`", "viewer": false}
2023-01-05T03:47:20+00:00
5afa6a30bf39abc3b8f31f073c7b65df78e0dd24
# Dataset Card for `natural-questions` The `natural-questions` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/natural-questions#natural-questions). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=28,390,850 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/natural-questions', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'html': ..., 'start_byte': ..., 'end_byte': ..., 'start_token': ..., 'end_token': ..., 'document_title': ..., 'document_url': ..., 'parent_doc_id': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Kwiatkowski2019Nq, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {TACL} } ```
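Each record in this corpus is a passage, and `parent_doc_id` ties passages back to their source page, with `start_byte` giving their position in it. A rough sketch of regrouping passages per page in reading order, shown on a small hypothetical in-memory sample rather than the full 28M-record corpus (the `doc0-*` ids are illustrative, not real corpus ids):

```python
from collections import defaultdict

def group_passages(docs):
    """Group passage records by parent_doc_id, ordered by start_byte."""
    pages = defaultdict(list)
    for d in docs:
        pages[d['parent_doc_id']].append(d)
    for passages in pages.values():
        passages.sort(key=lambda d: d['start_byte'])
    return dict(pages)

# Hypothetical sample with only the fields this sketch needs.
sample = [
    {'doc_id': 'doc0-3', 'text': 'second passage', 'start_byte': 500, 'parent_doc_id': 'doc0'},
    {'doc_id': 'doc0-1', 'text': 'first passage', 'start_byte': 10, 'parent_doc_id': 'doc0'},
]
pages = group_passages(sample)
```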
irds/natural-questions
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:47:26+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`natural-questions`", "viewer": false}
2023-01-05T03:47:31+00:00
1edb0703e2cd3fd18745d67a7bd9fd6e3cb8d859
# Dataset Card for `nyt` The `nyt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,864,661 This dataset is used by: [`nyt_trec-core-2017`](https://huggingface.co/datasets/irds/nyt_trec-core-2017), [`nyt_wksup`](https://huggingface.co/datasets/irds/nyt_wksup), [`nyt_wksup_train`](https://huggingface.co/datasets/irds/nyt_wksup_train), [`nyt_wksup_valid`](https://huggingface.co/datasets/irds/nyt_wksup_valid) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/nyt', 'docs') for record in docs: record # {'doc_id': ..., 'headline': ..., 'body': ..., 'source_xml': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:47:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`nyt`", "viewer": false}
2023-01-05T03:47:43+00:00
83569ce3f7de11a59f3f8de32f6bb4512628bce1
# Dataset Card for `nyt/trec-core-2017` The `nyt/trec-core-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/trec-core-2017). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=30,030 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_trec-core-2017', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/nyt_trec-core-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Allan2017TrecCore, author = {James Allan and Donna Harman and Evangelos Kanoulas and Dan Li and Christophe Van Gysel and Ellen Voorhees}, title = {TREC 2017 Common Core Track Overview}, booktitle = {TREC}, year = {2017} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_trec-core-2017
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:47:48+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/trec-core-2017`", "viewer": false}
2023-01-05T03:47:54+00:00
ff3519ef39dd30d0750f2adff16282c3728259f9
# Dataset Card for `nyt/wksup` The `nyt/wksup` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup). # Data This dataset provides: - `queries` (i.e., topics); count=1,864,661 - `qrels`: (relevance assessments); count=1,864,661 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
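Because the query and qrels counts match one-to-one here (each query is derived from an article paired with itself as the relevant document), the two splits can be joined on `query_id` to produce weak-supervision training pairs. A minimal sketch over hypothetical in-memory records in the same shape as the splits above (the ids and headline text are illustrative only):

```python
def make_training_pairs(queries, qrels):
    """Join query text to relevant doc_ids via query_id."""
    text_by_qid = {q['query_id']: q['text'] for q in queries}
    return [(text_by_qid[r['query_id']], r['doc_id'])
            for r in qrels if r['query_id'] in text_by_qid]

# Hypothetical records mirroring the queries/qrels layout above.
queries = [{'query_id': '0001', 'text': 'Some Headline'}]
qrels = [{'query_id': '0001', 'doc_id': '0001', 'relevance': 1}]
pairs = make_training_pairs(queries, qrels)
```

The resulting `(headline, doc_id)` pairs can then be expanded to `(headline, body)` by looking the `doc_id` up in the `irds/nyt` docs.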
irds/nyt_wksup
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:47:59+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup`", "viewer": false}
2023-01-05T03:48:05+00:00
452d6a5f6e151dc2c22b9ad4eaaae5fafe216ebc
# Dataset Card for `nyt/wksup/train` The `nyt/wksup/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup/train). # Data This dataset provides: - `queries` (i.e., topics); count=1,863,657 - `qrels`: (relevance assessments); count=1,863,657 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_wksup_train
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:48:10+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup/train`", "viewer": false}
2023-01-05T03:48:16+00:00
d45ff1ab82b3028d7637b37d6ccea966be04866d
# Dataset Card for `nyt/wksup/valid` The `nyt/wksup/valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup/valid). # Data This dataset provides: - `queries` (i.e., topics); count=1,004 - `qrels`: (relevance assessments); count=1,004 - For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/nyt_wksup_valid', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/nyt_wksup_valid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{MacAvaney2019Wksup, author = {MacAvaney, Sean and Yates, Andrew and Hui, Kai and Frieder, Ophir}, title = {Content-Based Weak Supervision for Ad-Hoc Re-Ranking}, booktitle = {SIGIR}, year = {2019} } @article{Sandhaus2008Nyt, title={The new york times annotated corpus}, author={Sandhaus, Evan}, journal={Linguistic Data Consortium, Philadelphia}, volume={6}, number={12}, pages={e26752}, year={2008} } ```
irds/nyt_wksup_valid
[ "task_categories:text-retrieval", "source_datasets:irds/nyt", "region:us" ]
2023-01-05T03:48:21+00:00
{"source_datasets": ["irds/nyt"], "task_categories": ["text-retrieval"], "pretty_name": "`nyt/wksup/valid`", "viewer": false}
2023-01-05T03:48:27+00:00
9cd1241b86e7da023c9b7c34a5910dc5354ef85d
# Dataset Card for `pmc/v1` The `pmc/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=733,111 This dataset is used by: [`pmc_v1_trec-cds-2014`](https://huggingface.co/datasets/irds/pmc_v1_trec-cds-2014), [`pmc_v1_trec-cds-2015`](https://huggingface.co/datasets/irds/pmc_v1_trec-cds-2015) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/pmc_v1', 'docs') for record in docs: record # {'doc_id': ..., 'journal': ..., 'title': ..., 'abstract': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/pmc_v1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:48:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1`", "viewer": false}
2023-01-05T03:48:38+00:00
fe480013625a42a1742f3c821ea8541eb350d01e
# Dataset Card for `pmc/v1/trec-cds-2014` The `pmc/v1/trec-cds-2014` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1/trec-cds-2014). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,949 - For `docs`, use [`irds/pmc_v1`](https://huggingface.co/datasets/irds/pmc_v1) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v1_trec-cds-2014', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v1_trec-cds-2014', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Simpson2014TrecCds, title={Overview of the TREC 2014 Clinical Decision Support Track}, author={Matthew S. Simpson and Ellen M. Voorhees and William Hersh}, booktitle={TREC}, year={2014} } ```
irds/pmc_v1_trec-cds-2014
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v1", "region:us" ]
2023-01-05T03:48:43+00:00
{"source_datasets": ["irds/pmc_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1/trec-cds-2014`", "viewer": false}
2023-01-05T03:48:49+00:00
4997f50b0c869500fc1e8201a908286c199992c1
# Dataset Card for `pmc/v1/trec-cds-2015` The `pmc/v1/trec-cds-2015` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v1/trec-cds-2015). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,807 - For `docs`, use [`irds/pmc_v1`](https://huggingface.co/datasets/irds/pmc_v1) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v1_trec-cds-2015', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v1_trec-cds-2015', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2015TrecCds, title={Overview of the TREC 2015 Clinical Decision Support Track}, author={Kirk Roberts and Matthew S. Simpson and Ellen Voorhees and William R. Hersh}, booktitle={TREC}, year={2015} } ```
irds/pmc_v1_trec-cds-2015
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v1", "region:us" ]
2023-01-05T03:48:55+00:00
{"source_datasets": ["irds/pmc_v1"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v1/trec-cds-2015`", "viewer": false}
2023-01-05T03:49:00+00:00
c34de21e098ff1b5732cf35bca07864655db6159
# Dataset Card for `pmc/v2` The `pmc/v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,255,260 This dataset is used by: [`pmc_v2_trec-cds-2016`](https://huggingface.co/datasets/irds/pmc_v2_trec-cds-2016) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/pmc_v2', 'docs') for record in docs: record # {'doc_id': ..., 'journal': ..., 'title': ..., 'abstract': ..., 'body': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/pmc_v2
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:06+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v2`", "viewer": false}
2023-01-05T03:49:11+00:00
9ca8ce2ee813d3d2f3eefa4c390077092e9373e7
# Dataset Card for `pmc/v2/trec-cds-2016` The `pmc/v2/trec-cds-2016` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/pmc#pmc/v2/trec-cds-2016). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=37,707 - For `docs`, use [`irds/pmc_v2`](https://huggingface.co/datasets/irds/pmc_v2) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/pmc_v2_trec-cds-2016', 'queries') for record in queries: record # {'query_id': ..., 'type': ..., 'note': ..., 'description': ..., 'summary': ...} qrels = load_dataset('irds/pmc_v2_trec-cds-2016', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2016TrecCds, title={Overview of the TREC 2016 Clinical Decision Support Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh}, booktitle={TREC}, year={2016} } ```
irds/pmc_v2_trec-cds-2016
[ "task_categories:text-retrieval", "source_datasets:irds/pmc_v2", "region:us" ]
2023-01-05T03:49:17+00:00
{"source_datasets": ["irds/pmc_v2"], "task_categories": ["text-retrieval"], "pretty_name": "`pmc/v2/trec-cds-2016`", "viewer": false}
2023-01-05T03:49:23+00:00
2412561bf98177e9f67d82e07e337723c2a5b67e
# Dataset Card for `argsme/2020-04-01/touche-2020-task-1` The `argsme/2020-04-01/touche-2020-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2020-task-1). # Data This dataset provides: - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,298 This dataset is used by: [`argsme_2020-04-01_touche-2020-task-1_uncorrected`](https://huggingface.co/datasets/irds/argsme_2020-04-01_touche-2020-task-1_uncorrected) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 11th International Conference of the CLEF Association (CLEF 2020)}, doi = {10.1007/978-3-030-58219-7\_26}, editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro}, month = sep, pages = {384-395}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Thessaloniki, Greece}, title = {{Overview of Touch{\'e} 2020: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26}, volume = 12260, year = 2020, } @inproceedings{Wachsmuth2017Quality, author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein}, booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)}, editor = {Phil Blunsom and Alexander Koller and Mirella Lapata}, month = apr, pages = {176-187}, site = {Valencia, Spain}, title = {{Computational Argumentation Quality Assessment in Natural Language}}, url = {http://aclweb.org/anthology/E17-1017}, year = 2017 } ```
irds/argsme_2020-04-01_touche-2020-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2020-task-1`", "viewer": false}
2023-01-05T03:49:34+00:00
1aac6bf8d4a822d1252d3a72243c8ea1cdd740b3
# Dataset Card for `clueweb12/touche-2020-task-2` The `clueweb12/touche-2020-task-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2020-task-2). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=1,783 - For `docs`, use [`irds/clueweb12`](https://huggingface.co/datasets/irds/clueweb12) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_touche-2020-task-2', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/clueweb12_touche-2020-task-2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
11th International Conference of the CLEF Association (CLEF 2020)}, doi = {10.1007/978-3-030-58219-7\_26}, editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro}, month = sep, pages = {384-395}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Thessaloniki, Greece}, title = {{Overview of Touch{\'e} 2020: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26}, volume = 12260, year = 2020, } @inproceedings{Braunstain2016Support, author = {Liora Braunstain and Oren Kurland and David Carmel and Idan Szpektor and Anna Shtok}, editor = {Nicola Ferro and Fabio Crestani and Marie{-}Francine Moens and Josiane Mothe and Fabrizio Silvestri and Giorgio Maria Di Nunzio and Claudia Hauff and Gianmaria Silvello}, title = {Supporting Human Answers for Advice-Seeking Questions in {CQA} Sites}, booktitle = {Advances in Information Retrieval - 38th European Conference on {IR} Research, {ECIR} 2016, Padua, Italy, March 20-23, 2016. Proceedings}, series = {Lecture Notes in Computer Science}, volume = {9626}, pages = {129--141}, publisher = {Springer}, year = {2016}, doi = {10.1007/978-3-319-30671-1\_10}, } @inproceedings{Rafalak2014Credibility, author = {Maria Rafalak and Katarzyna Abramczuk and Adam Wierzbicki}, editor = {Chin{-}Wan Chung and Andrei Z. Broder and Kyuseok Shim and Torsten Suel}, title = {Incredible: is (almost) all web content trustworthy? analysis of psychological factors related to website credibility evaluation}, booktitle = {23rd International World Wide Web Conference, {WWW} '14, Seoul, Republic of Korea, April 7-11, 2014, Companion Volume}, pages = {1117--1122}, publisher = {{ACM}}, year = {2014}, doi = {10.1145/2567948.2578997}, } ```
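Across these Touché cards the `qrels` records share the flat `{'query_id', 'doc_id', 'relevance', 'iteration'}` shape shown in the usage snippet above. A minimal sketch of collapsing such records into the nested `{query_id: {doc_id: relevance}}` mapping that evaluation tools like pytrec_eval expect — the sample records here are illustrative stand-ins, not real judgments:

```python
from collections import defaultdict

def qrels_to_dict(qrels):
    """Collapse flat qrels records into {query_id: {doc_id: relevance}}."""
    nested = defaultdict(dict)
    for rec in qrels:
        nested[rec['query_id']][rec['doc_id']] = int(rec['relevance'])
    return dict(nested)

# Illustrative records only -- the real judgments come from load_dataset(...).
sample = [
    {'query_id': '1', 'doc_id': 'clueweb12-0001', 'relevance': 2, 'iteration': '0'},
    {'query_id': '1', 'doc_id': 'clueweb12-0002', 'relevance': 0, 'iteration': '0'},
    {'query_id': '2', 'doc_id': 'clueweb12-0003', 'relevance': 1, 'iteration': '0'},
]
print(qrels_to_dict(sample))
# {'1': {'clueweb12-0001': 2, 'clueweb12-0002': 0}, '2': {'clueweb12-0003': 1}}
```

The same helper works unchanged for any of the qrels splits on this page, since they all expose `query_id`, `doc_id`, and `relevance` fields.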
irds/clueweb12_touche-2020-task-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12", "region:us" ]
2023-01-05T03:49:39+00:00
{"source_datasets": ["irds/clueweb12"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2020-task-2`", "viewer": false}
2023-01-05T03:49:45+00:00
6939f0f803e794d67cc83a7f2bc4fac238a7eeb8
# Dataset Card for `argsme/2020-04-01/touche-2021-task-1` The `argsme/2020-04-01/touche-2021-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2021-task-1). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=3,711 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/argsme_2020-04-01_touche-2021-task-1', 'queries') for record in queries: record # {'query_id': ..., 'title': ...} qrels = load_dataset('irds/argsme_2020-04-01_touche-2021-task-1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2021Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Lukas Gienapp and Maik Fr{\"o}be and Meriem Beloucif and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 12th International Conference of the CLEF Association (CLEF 2021)}, doi = {10.1007/978-3-030-85251-1\_28}, editor = {{K. Sel{\c{c}}uk} Candan and Bogdan Ionescu and Lorraine Goeuriot and Henning M{\"u}ller and Alexis Joly and Maria Maistro and Florina Piroi and Guglielmo Faggioli and Nicola Ferro}, month = sep, pages = {450-467}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bucharest, Romania}, title = {{Overview of Touch{\'e} 2021: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-85251-1_28}, volume = 12880, year = 2021, } ```
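Unlike the 2020 qrels, these records carry both a `relevance` and a `quality` grade. A small sketch of filtering on both dimensions at once — the threshold values and sample records are illustrative assumptions, not part of the official evaluation protocol:

```python
def filter_qrels(qrels, min_relevance=1, min_quality=1):
    """Keep only judgments meeting both grade thresholds (thresholds are illustrative)."""
    return [r for r in qrels
            if int(r['relevance']) >= min_relevance and int(r['quality']) >= min_quality]

# Made-up records mirroring the field names in the usage snippet above.
sample = [
    {'query_id': '51', 'doc_id': 'a1', 'relevance': 2, 'quality': 2, 'iteration': '0'},
    {'query_id': '51', 'doc_id': 'a2', 'relevance': 2, 'quality': 0, 'iteration': '0'},
    {'query_id': '51', 'doc_id': 'a3', 'relevance': -2, 'quality': 1, 'iteration': '0'},
]
print([r['doc_id'] for r in filter_qrels(sample)])  # ['a1']
```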
irds/argsme_2020-04-01_touche-2021-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:49:51+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2021-task-1`", "viewer": false}
2023-01-05T03:49:56+00:00
a4a3969a7fd79461be22346d0501c0d757f5afcc
# Dataset Card for `clueweb12/touche-2021-task-2` The `clueweb12/touche-2021-task-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2021-task-2). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=2,076 - For `docs`, use [`irds/clueweb12`](https://huggingface.co/datasets/irds/clueweb12) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clueweb12_touche-2021-task-2', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/clueweb12_touche-2021-task-2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2021Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Lukas Gienapp and Maik Fr{\"o}be and Meriem Beloucif and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 12th International Conference of the CLEF Association (CLEF 2021)}, doi = {10.1007/978-3-030-85251-1\_28}, editor = {{K. 
Sel{\c{c}}uk} Candan and Bogdan Ionescu and Lorraine Goeuriot and Henning M{\"u}ller and Alexis Joly and Maria Maistro and Florina Piroi and Guglielmo Faggioli and Nicola Ferro}, month = sep, pages = {450-467}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bucharest, Romania}, title = {{Overview of Touch{\'e} 2021: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-85251-1_28}, volume = 12880, year = 2021, } ```
irds/clueweb12_touche-2021-task-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12", "region:us" ]
2023-01-05T03:50:02+00:00
{"source_datasets": ["irds/clueweb12"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2021-task-2`", "viewer": false}
2023-01-05T03:50:08+00:00
1a626c69e2f77d913fe16acc8d2d46c1297ca9a2
# Dataset Card for `argsme/2020-04-01/processed/touche-2022-task-1` The `argsme/2020-04-01/processed/touche-2022-task-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/processed/touche-2022-task-1). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=6,841 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/argsme_2020-04-01_processed_touche-2022-task-1', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/argsme_2020-04-01_processed_touche-2022-task-1', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'quality': ..., 'coherence': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2022Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
13th International Conference of the CLEF Association (CLEF 2022)}, editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro}, month = sep, numpages = 29, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bologna, Italy}, title = {{Overview of Touch{\'e} 2022: Argument Retrieval}}, year = 2022 } ```
irds/argsme_2020-04-01_processed_touche-2022-task-1
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:13+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/processed/touche-2022-task-1`", "viewer": false}
2023-01-05T03:50:19+00:00
358763bc770a797441a4b9ecc28294944a3e21a8
# Dataset Card for `touche-image/2022-06-13/touche-2022-task-3` The `touche-image/2022-06-13/touche-2022-task-3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/touche-image#touche-image/2022-06-13/touche-2022-task-3). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=19,821 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/touche-image_2022-06-13_touche-2022-task-3', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/touche-image_2022-06-13_touche-2022-task-3', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2022Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
13th International Conference of the CLEF Association (CLEF 2022)}, editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro}, month = sep, numpages = 29, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bologna, Italy}, title = {{Overview of Touch{\'e} 2022: Argument Retrieval}}, year = 2022 } @inproceedings{Kiesel2021Image, author = {Johannes Kiesel and Nico Reichenbach and Benno Stein and Martin Potthast}, booktitle = {8th Workshop on Argument Mining (ArgMining 2021) at EMNLP}, doi = {10.18653/v1/2021.argmining-1.4}, editor = {Khalid Al-Khatib and Yufang Hou and Manfred Stede}, month = nov, pages = {36-45}, publisher = {Association for Computational Linguistics}, site = {Punta Cana, Dominican Republic}, title = {{Image Retrieval for Arguments Using Stance-Aware Query Expansion}}, url = {https://aclanthology.org/2021.argmining-1.4/}, year = 2021 } @inproceedings{Dimitrov2021SemEval, author = {Dimitar Dimitrov and Bishr Bin Ali and Shaden Shaar and Firoj Alam and Fabrizio Silvestri and Hamed Firooz and Preslav Nakov and Giovanni Da San Martino}, editor = {Alexis Palmer and Nathan Schneider and Natalie Schluter and Guy Emerson and Aur{\'{e}}lie Herbelot and Xiaodan Zhu}, title = {SemEval-2021 Task 6: Detection of Persuasion Techniques in Texts and Images}, booktitle = {Proceedings of the 15th International Workshop on Semantic Evaluation, SemEval@ACL/IJCNLP 2021, Virtual Event / Bangkok, Thailand, August 5-6, 2021}, pages = {70--98}, publisher = {Association for Computational Linguistics}, year = {2021}, doi = {10.18653/v1/2021.semeval-1.7}, } @inproceedings{Yanai2007Image, author = {Keiji Yanai}, editor = {Carey L. Williamson and Mary Ellen Zurko and Peter F. Patel{-}Schneider and Prashant J. 
Shenoy}, title = {Image collector {III:} a web image-gathering system with bag-of-keypoints}, booktitle = {Proceedings of the 16th International Conference on World Wide Web, {WWW} 2007, Banff, Alberta, Canada, May 8-12, 2007}, pages = {1295--1296}, publisher = {{ACM}}, year = {2007}, doi = {10.1145/1242572.1242816}, } ```
irds/touche-image_2022-06-13_touche-2022-task-3
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:24+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`touche-image/2022-06-13/touche-2022-task-3`", "viewer": false}
2023-01-05T03:50:30+00:00
6df8436fb6c1dbc6ab79f263347a321e043f6244
# Dataset Card for `argsme/1.0/touche-2020-task-1/uncorrected` The `argsme/1.0/touche-2020-task-1/uncorrected` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/1.0/touche-2020-task-1/uncorrected). # Data This dataset provides: - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,964 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/argsme_1.0_touche-2020-task-1_uncorrected', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/argsme_1.0_touche-2020-task-1_uncorrected', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
11th International Conference of the CLEF Association (CLEF 2020)}, doi = {10.1007/978-3-030-58219-7\_26}, editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro}, month = sep, pages = {384-395}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Thessaloniki, Greece}, title = {{Overview of Touch{\'e} 2020: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26}, volume = 12260, year = 2020, } @inproceedings{Wachsmuth2017Quality, author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein}, booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)}, editor = {Phil Blunsom and Alexander Koller and Mirella Lapata}, month = apr, pages = {176-187}, site = {Valencia, Spain}, title = {{Computational Argumentation Quality Assessment in Natural Language}}, url = {http://aclweb.org/anthology/E17-1017}, year = 2017 } ```
irds/argsme_1.0_touche-2020-task-1_uncorrected
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:35+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/1.0/touche-2020-task-1/uncorrected`", "viewer": false}
2023-01-05T03:50:41+00:00
5cf1db3fbb635c2b2d177fdfc5ce6b390f8a1222
# Dataset Card for `argsme/2020-04-01/touche-2020-task-1/uncorrected` The `argsme/2020-04-01/touche-2020-task-1/uncorrected` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/argsme#argsme/2020-04-01/touche-2020-task-1/uncorrected). # Data This dataset provides: - `qrels`: (relevance assessments); count=2,298 - For `queries`, use [`irds/argsme_2020-04-01_touche-2020-task-1`](https://huggingface.co/datasets/irds/argsme_2020-04-01_touche-2020-task-1) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/argsme_2020-04-01_touche-2020-task-1_uncorrected', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 
11th International Conference of the CLEF Association (CLEF 2020)}, doi = {10.1007/978-3-030-58219-7\_26}, editor = {Avi Arampatzis and Evangelos Kanoulas and Theodora Tsikrika and Stefanos Vrochidis and Hideo Joho and Christina Lioma and Carsten Eickhoff and Aur{\'e}lie N{\'e}v{\'e}ol and Linda Cappellato and Nicola Ferro}, month = sep, pages = {384-395}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Thessaloniki, Greece}, title = {{Overview of Touch{\'e} 2020: Argument Retrieval}}, url = {https://link.springer.com/chapter/10.1007/978-3-030-58219-7_26}, volume = 12260, year = 2020, } @inproceedings{Wachsmuth2017Quality, author = {Henning Wachsmuth and Nona Naderi and Yufang Hou and Yonatan Bilu and Vinodkumar Prabhakaran and Tim Alberdingk Thijm and Graeme Hirst and Benno Stein}, booktitle = {15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)}, editor = {Phil Blunsom and Alexander Koller and Mirella Lapata}, month = apr, pages = {176-187}, site = {Valencia, Spain}, title = {{Computational Argumentation Quality Assessment in Natural Language}}, url = {http://aclweb.org/anthology/E17-1017}, year = 2017 } ```
irds/argsme_2020-04-01_touche-2020-task-1_uncorrected
[ "task_categories:text-retrieval", "source_datasets:irds/argsme_2020-04-01_touche-2020-task-1", "region:us" ]
2023-01-05T03:50:47+00:00
{"source_datasets": ["irds/argsme_2020-04-01_touche-2020-task-1"], "task_categories": ["text-retrieval"], "pretty_name": "`argsme/2020-04-01/touche-2020-task-1/uncorrected`", "viewer": false}
2023-01-05T03:50:52+00:00
e5feb990f58cca9f31dd2ecff574884a3badc30b
# Dataset Card for `clueweb12/touche-2022-task-2/expanded-doc-t5-query` The `clueweb12/touche-2022-task-2/expanded-doc-t5-query` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2022-task-2/expanded-doc-t5-query). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=868,655 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'chatnoir_url': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2022Touche, address = {Berlin Heidelberg New York}, author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022)}, editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro}, month = sep, numpages = 29, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Bologna, Italy}, title = {{Overview of Touch{\'e} 2022: Argument Retrieval}}, year = 2022 } ```
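With 868,655 expanded documents, iterating the whole `docs` split just to inspect a few records is wasteful. A sketch of peeking at the first few records of any iterable of docs — the mock generator below stands in for the real dataset (with πŸ€— Datasets, passing `streaming=True` to `load_dataset` similarly avoids materializing the full corpus):

```python
from itertools import islice

def peek(records, n=3):
    """Materialize only the first n records of a (possibly huge) iterable."""
    return list(islice(records, n))

def mock_docs():
    # Stand-in for the real docs split; yields lazily, so nothing large is built.
    for i in range(868_655):
        yield {'doc_id': f'clueweb12-{i:07d}', 'text': '...', 'chatnoir_url': '...'}

first = peek(mock_docs(), n=2)
print([d['doc_id'] for d in first])  # ['clueweb12-0000000', 'clueweb12-0000001']
```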
irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:50:58+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/touche-2022-task-2/expanded-doc-t5-query`", "viewer": false}
2023-01-05T03:51:03+00:00
d9f57041624d76b852c44aa90b6df8ab9f54b18e
# Dataset Card for `trec-arabic` The `trec-arabic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=383,872 This dataset is used by: [`trec-arabic_ar2001`](https://huggingface.co/datasets/irds/trec-arabic_ar2001), [`trec-arabic_ar2002`](https://huggingface.co/datasets/irds/trec-arabic_ar2002) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-arabic', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David, and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
irds/trec-arabic
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:51:09+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic`", "viewer": false}
2023-01-05T03:51:15+00:00
00012ae7b44b7db5b8843da36594af763491e000
# Dataset Card for `trec-arabic/ar2001` The `trec-arabic/ar2001` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2001). # Data This dataset provides: - `queries` (i.e., topics); count=25 - `qrels`: (relevance assessments); count=22,744 - For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-arabic_ar2001', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/trec-arabic_ar2001', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Gey2001Arabic, title={The TREC-2001 Cross-Language Information Retrieval Track: Searching Arabic using English, French or Arabic Queries}, author={Fredric Gey and Douglas Oard}, booktitle={TREC}, year={2001} } @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David, and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
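With 22,744 judgments over only 25 topics, it can be useful to see how the relevance grades are distributed before running an evaluation. A sketch using `collections.Counter` — the sample records are made up for illustration:

```python
from collections import Counter

def relevance_histogram(qrels):
    """Count how many judgments fall at each relevance level."""
    return Counter(int(r['relevance']) for r in qrels)

# Illustrative records only; real ones come from load_dataset(...).
sample = [
    {'query_id': 'AR1', 'doc_id': 'd1', 'relevance': 1, 'iteration': '0'},
    {'query_id': 'AR1', 'doc_id': 'd2', 'relevance': 0, 'iteration': '0'},
    {'query_id': 'AR2', 'doc_id': 'd3', 'relevance': 0, 'iteration': '0'},
]
print(relevance_histogram(sample))  # Counter({0: 2, 1: 1})
```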
irds/trec-arabic_ar2001
[ "task_categories:text-retrieval", "source_datasets:irds/trec-arabic", "region:us" ]
2023-01-05T03:51:20+00:00
{"source_datasets": ["irds/trec-arabic"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic/ar2001`", "viewer": false}
2023-01-05T03:51:26+00:00
edc95002c110b3d7d12da315316d9776a7560045
# Dataset Card for `trec-arabic/ar2002` The `trec-arabic/ar2002` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic/ar2002). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=38,432 - For `docs`, use [`irds/trec-arabic`](https://huggingface.co/datasets/irds/trec-arabic) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-arabic_ar2002', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/trec-arabic_ar2002', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Gey2002Arabic, title={The TREC-2002 Arabic/English CLIR Track}, author={Fredric Gey and Douglas Oard}, booktitle={TREC}, year={2002} } @misc{Graff2001Arabic, title={Arabic Newswire Part 1 LDC2001T55}, author={Graff, David, and Walker, Kevin}, year={2001}, url={https://catalog.ldc.upenn.edu/LDC2001T55}, publisher={Linguistic Data Consortium} } ```
irds/trec-arabic_ar2002
[ "task_categories:text-retrieval", "source_datasets:irds/trec-arabic", "region:us" ]
2023-01-05T03:51:31+00:00
{"source_datasets": ["irds/trec-arabic"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-arabic/ar2002`", "viewer": false}
2023-01-05T03:51:37+00:00
ec1c7240bfdcb708b21cbf57ac7ab43aedf85b0e
# Dataset Card for `trec-mandarin` The `trec-mandarin` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=164,789 This dataset is used by: [`trec-mandarin_trec5`](https://huggingface.co/datasets/irds/trec-mandarin_trec5), [`trec-mandarin_trec6`](https://huggingface.co/datasets/irds/trec-mandarin_trec6) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-mandarin', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
irds/trec-mandarin
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:51:42+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin`", "viewer": false}
2023-01-05T03:51:48+00:00
0e61092a7f358452ef2e692b5f5fdd6676c1cd44
# Dataset Card for `trec-mandarin/trec5` The `trec-mandarin/trec5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin/trec5). # Data This dataset provides: - `queries` (i.e., topics); count=28 - `qrels`: (relevance assessments); count=15,588 - For `docs`, use [`irds/trec-mandarin`](https://huggingface.co/datasets/irds/trec-mandarin) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-mandarin_trec5', 'queries') for record in queries: record # {'query_id': ..., 'title_en': ..., 'title_zh': ..., 'description_en': ..., 'description_zh': ..., 'narrative_en': ..., 'narrative_zh': ...} qrels = load_dataset('irds/trec-mandarin_trec5', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Harman1997Chinese, title={Spanish and Chinese Document Retrieval in TREC-5}, author={Alan Smeaton and Ross Wilkinson}, booktitle={TREC}, year={1996} } @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
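These CLIR topics expose each field in two languages (`title_en`/`title_zh`, and likewise for description and narrative). A sketch of assembling a query string from whichever language side is wanted — the topic record below is a made-up example that only mirrors the field names listed above:

```python
def query_text(record, lang='en'):
    """Assemble a query string from the language-suffixed fields of a topic record."""
    parts = [record.get(f'title_{lang}'), record.get(f'description_{lang}')]
    return ' '.join(p for p in parts if p)

# Hypothetical record; real topics come from load_dataset(...).
topic = {
    'query_id': 'CH1',
    'title_en': 'Most favored nation status', 'title_zh': '最恡ε›½εΎ…ι‡',
    'description_en': 'Documents about MFN trade status.', 'description_zh': '',
    'narrative_en': '', 'narrative_zh': '',
}
print(query_text(topic, lang='en'))
# Most favored nation status Documents about MFN trade status.
```

Empty fields are skipped, so the same call with `lang='zh'` falls back to the title alone here.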
irds/trec-mandarin_trec5
[ "task_categories:text-retrieval", "source_datasets:irds/trec-mandarin", "region:us" ]
2023-01-05T03:51:53+00:00
{"source_datasets": ["irds/trec-mandarin"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin/trec5`", "viewer": false}
2023-01-05T03:51:59+00:00
7de9a307829401a6ea93a330cde12ad78cabe19c
# Dataset Card for `trec-mandarin/trec6` The `trec-mandarin/trec6` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-mandarin#trec-mandarin/trec6). # Data This dataset provides: - `queries` (i.e., topics); count=26 - `qrels`: (relevance assessments); count=9,236 - For `docs`, use [`irds/trec-mandarin`](https://huggingface.co/datasets/irds/trec-mandarin) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/trec-mandarin_trec6', 'queries') for record in queries: record # {'query_id': ..., 'title_en': ..., 'title_zh': ..., 'description_en': ..., 'description_zh': ..., 'narrative_en': ..., 'narrative_zh': ...} qrels = load_dataset('irds/trec-mandarin_trec6', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @inproceedings{Wilkinson1998Chinese, title={Chinese Document Retrieval at TREC-6}, author={Ross Wilkinson}, booktitle={TREC}, year={1997} } @misc{Rogers2000Mandarin, title={TREC Mandarin LDC2000T52}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T52}, publisher={Linguistic Data Consortium} } ```
irds/trec-mandarin_trec6
[ "task_categories:text-retrieval", "source_datasets:irds/trec-mandarin", "region:us" ]
2023-01-05T03:52:05+00:00
{"source_datasets": ["irds/trec-mandarin"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-mandarin/trec6`", "viewer": false}
2023-01-05T03:52:10+00:00
ff0ab628c396fb46c0d52b11abd1d69b9649306a
# Dataset Card for `trec-spanish` The `trec-spanish` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=120,605 This dataset is used by: [`trec-spanish_trec3`](https://huggingface.co/datasets/irds/trec-spanish_trec3), [`trec-spanish_trec4`](https://huggingface.co/datasets/irds/trec-spanish_trec4) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/trec-spanish', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format. ## Citation Information ``` @misc{Rogers2000Spanish, title={TREC Spanish LDC2000T51}, author={Rogers, Willie}, year={2000}, url={https://catalog.ldc.upenn.edu/LDC2000T51}, publisher={Linguistic Data Consortium} } ```
irds/trec-spanish
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:52:16+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish`", "viewer": false}
2023-01-05T03:52:21+00:00
e8784514251b0fa3d42c0ed6bfaf65b44df7e57e
# Dataset Card for `trec-spanish/trec3`

The `trec-spanish/trec3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish/trec3).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=25
 - `qrels` (relevance assessments); count=19,005
 - For `docs`, use [`irds/trec-spanish`](https://huggingface.co/datasets/irds/trec-spanish)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-spanish_trec3', 'queries')
for record in queries:
    record # {'query_id': ..., 'title_es': ..., 'title_en': ..., 'description_es': ..., 'description_en': ..., 'narrative_es': ..., 'narrative_en': ...}

qrels = load_dataset('irds/trec-spanish_trec3', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Harman1994Trec3,
  title={Overview of the Third Text REtrieval Conference (TREC-3)},
  author={Donna Harman},
  booktitle={TREC},
  year={1994}
}
@misc{Rogers2000Spanish,
  title={TREC Spanish LDC2000T51},
  author={Rogers, Willie},
  year={2000},
  url={https://catalog.ldc.upenn.edu/LDC2000T51},
  publisher={Linguistic Data Consortium}
}
```
irds/trec-spanish_trec3
[ "task_categories:text-retrieval", "source_datasets:irds/trec-spanish", "region:us" ]
2023-01-05T03:52:27+00:00
{"source_datasets": ["irds/trec-spanish"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish/trec3`", "viewer": false}
2023-01-05T03:52:32+00:00
814668fca7e7893cf79944759b8d8b8fd49e7901
# Dataset Card for `trec-spanish/trec4`

The `trec-spanish/trec4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-spanish#trec-spanish/trec4).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=25
 - `qrels` (relevance assessments); count=13,109
 - For `docs`, use [`irds/trec-spanish`](https://huggingface.co/datasets/irds/trec-spanish)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-spanish_trec4', 'queries')
for record in queries:
    record # {'query_id': ..., 'description_es1': ..., 'description_en1': ..., 'description_es2': ..., 'description_en2': ...}

qrels = load_dataset('irds/trec-spanish_trec4', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Harman1995Trec4,
  title={Overview of the Fourth Text REtrieval Conference (TREC-4)},
  author={Donna Harman},
  booktitle={TREC},
  year={1995}
}
@misc{Rogers2000Spanish,
  title={TREC Spanish LDC2000T51},
  author={Rogers, Willie},
  year={2000},
  url={https://catalog.ldc.upenn.edu/LDC2000T51},
  publisher={Linguistic Data Consortium}
}
```
irds/trec-spanish_trec4
[ "task_categories:text-retrieval", "source_datasets:irds/trec-spanish", "region:us" ]
2023-01-05T03:52:38+00:00
{"source_datasets": ["irds/trec-spanish"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-spanish/trec4`", "viewer": false}
2023-01-05T03:52:44+00:00
d3da193e0136acae9970ae7ced82ee189061baa2
# Dataset Card for `trec-robust04`

The `trec-robust04` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=528,155
 - `queries` (i.e., topics); count=250
 - `qrels` (relevance assessments); count=311,410

This dataset is used by: [`trec-robust04_fold1`](https://huggingface.co/datasets/irds/trec-robust04_fold1), [`trec-robust04_fold2`](https://huggingface.co/datasets/irds/trec-robust04_fold2), [`trec-robust04_fold3`](https://huggingface.co/datasets/irds/trec-robust04_fold3), [`trec-robust04_fold4`](https://huggingface.co/datasets/irds/trec-robust04_fold4), [`trec-robust04_fold5`](https://huggingface.co/datasets/irds/trec-robust04_fold5)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/trec-robust04', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}

queries = load_dataset('irds/trec-robust04', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/trec-robust04', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
```
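With the qrels in hand, retrieval runs over this corpus are typically scored per query. The sketch below shows one such metric, precision@k, computed from a judged-document lookup. It assumes only the qrels fields shown in the usage snippet; the run and judgments here are toy placeholders, not actual Robust04 ids or scores:

```python
def precision_at_k(ranked_doc_ids, relevant, k=10):
    """Fraction of the top-k retrieved docs judged relevant (relevance > 0)."""
    top = ranked_doc_ids[:k]
    hits = sum(1 for d in top if relevant.get(d, 0) > 0)
    return hits / k

# Toy judgments and a toy ranked run for a single query.
qrels = {'FBIS3-1': 1, 'FT921-2': 0, 'LA010189-3': 2}
run = ['FBIS3-1', 'FT921-2', 'LA010189-3', 'FR940104-4']
print(precision_at_k(run, qrels, k=4))  # 0.5
```

Unjudged documents (like `FR940104-4` above) count as non-relevant here, which matches the common convention for TREC-style evaluation; a production setup would more likely use an evaluation library than hand-rolled metrics.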
irds/trec-robust04
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:52:49+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04`", "viewer": false}
2023-01-05T03:52:55+00:00
a2bfaab7b44fec23deddd87435ef1be5f2ea614a
# Dataset Card for `trec-robust04/fold1`

The `trec-robust04/fold1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold1).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels` (relevance assessments); count=62,789
 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-robust04_fold1', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/trec-robust04_fold1', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
@inproceedings{Huston2014ACO,
  title={A Comparison of Retrieval Models using Term Dependencies},
  author={Samuel Huston and W. Bruce Croft},
  booktitle={CIKM},
  year={2014}
}
```
irds/trec-robust04_fold1
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:00+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold1`", "viewer": false}
2023-01-05T03:53:06+00:00
bf9cea1bfe668ad3fe0e69199f8b4a840feeec93
# Dataset Card for `trec-robust04/fold2`

The `trec-robust04/fold2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold2).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels` (relevance assessments); count=63,917
 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-robust04_fold2', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/trec-robust04_fold2', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
@inproceedings{Huston2014ACO,
  title={A Comparison of Retrieval Models using Term Dependencies},
  author={Samuel Huston and W. Bruce Croft},
  booktitle={CIKM},
  year={2014}
}
```
irds/trec-robust04_fold2
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:11+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold2`", "viewer": false}
2023-01-05T03:53:17+00:00
aeeafc3ed22f7f83cc780c71e6b99998148887d1
# Dataset Card for `trec-robust04/fold3`

The `trec-robust04/fold3` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold3).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels` (relevance assessments); count=62,901
 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-robust04_fold3', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/trec-robust04_fold3', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
@inproceedings{Huston2014ACO,
  title={A Comparison of Retrieval Models using Term Dependencies},
  author={Samuel Huston and W. Bruce Croft},
  booktitle={CIKM},
  year={2014}
}
```
irds/trec-robust04_fold3
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:22+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold3`", "viewer": false}
2023-01-05T03:53:28+00:00
15da11a10c4e23bf503627958668159d68e4c636
# Dataset Card for `trec-robust04/fold4`

The `trec-robust04/fold4` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold4).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels` (relevance assessments); count=57,962
 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-robust04_fold4', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/trec-robust04_fold4', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
@inproceedings{Huston2014ACO,
  title={A Comparison of Retrieval Models using Term Dependencies},
  author={Samuel Huston and W. Bruce Croft},
  booktitle={CIKM},
  year={2014}
}
```
irds/trec-robust04_fold4
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:34+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold4`", "viewer": false}
2023-01-05T03:53:39+00:00
c07c2da7eb63f7ddd3e06a086ce00d2082783f21
# Dataset Card for `trec-robust04/fold5`

The `trec-robust04/fold5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-robust04#trec-robust04/fold5).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=50
 - `qrels` (relevance assessments); count=63,841
 - For `docs`, use [`irds/trec-robust04`](https://huggingface.co/datasets/irds/trec-robust04)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/trec-robust04_fold5', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/trec-robust04_fold5', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Voorhees2004Robust,
  title={Overview of the TREC 2004 Robust Retrieval Track},
  author={Ellen Voorhees},
  booktitle={TREC},
  year={2004}
}
@inproceedings{Huston2014ACO,
  title={A Comparison of Retrieval Models using Term Dependencies},
  author={Samuel Huston and W. Bruce Croft},
  booktitle={CIKM},
  year={2014}
}
```
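The five Robust04 folds are conventionally used for cross-validation: each fold serves once as the held-out test set while the remaining four are pooled for training or parameter tuning. A minimal sketch of assembling those splits — the fold contents below are placeholders, not the actual 50-query-per-fold assignments:

```python
def cv_splits(folds):
    """Yield (train_query_ids, test_query_ids), holding out each fold once."""
    for i, test_fold in enumerate(folds):
        train = [qid for j, fold in enumerate(folds) if j != i for qid in fold]
        yield train, list(test_fold)

# Placeholder fold contents; the real folds each hold 50 Robust04 query ids.
folds = [['301', '302'], ['303', '304'], ['305', '306'],
         ['307', '308'], ['309', '310']]
splits = list(cv_splits(folds))
print(len(splits))    # 5
print(splits[0][1])   # ['301', '302']
```

In practice the train/test query id lists come from the `queries` configuration of each fold dataset; metrics are then averaged over the five held-out test sets.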
irds/trec-robust04_fold5
[ "task_categories:text-retrieval", "source_datasets:irds/trec-robust04", "region:us" ]
2023-01-05T03:53:45+00:00
{"source_datasets": ["irds/trec-robust04"], "task_categories": ["text-retrieval"], "pretty_name": "`trec-robust04/fold5`", "viewer": false}
2023-01-05T03:53:50+00:00
f339472c5955a8b8a883754290772621a391fdc5
# Dataset Card for `tripclick`

The `tripclick` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=1,523,878

This dataset is used by: [`tripclick_train`](https://huggingface.co/datasets/irds/tripclick_train), [`tripclick_train_head`](https://huggingface.co/datasets/irds/tripclick_train_head), [`tripclick_train_head_dctr`](https://huggingface.co/datasets/irds/tripclick_train_head_dctr), [`tripclick_train_hofstaetter-triples`](https://huggingface.co/datasets/irds/tripclick_train_hofstaetter-triples), [`tripclick_train_tail`](https://huggingface.co/datasets/irds/tripclick_train_tail), [`tripclick_train_torso`](https://huggingface.co/datasets/irds/tripclick_train_torso), [`tripclick_val_head_dctr`](https://huggingface.co/datasets/irds/tripclick_val_head_dctr)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/tripclick', 'docs')
for record in docs:
    record # {'doc_id': ..., 'title': ..., 'url': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Rekabsaz2021TripClick,
  title={TripClick: The Log Files of a Large Health Web Search Engine},
  author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff},
  year={2021},
  booktitle={SIGIR}
}
```
irds/tripclick
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T03:53:56+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick`", "viewer": false}
2023-01-05T03:54:01+00:00
b0b576f085a3ccce49d64f84c9d551f5e069dab4
# Dataset Card for `tripclick/train`

The `tripclick/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/tripclick#tripclick/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=685,649
 - `qrels` (relevance assessments); count=2,705,212
 - `docpairs`; count=23,221,224
 - For `docs`, use [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick)

This dataset is used by: [`tripclick_train_hofstaetter-triples`](https://huggingface.co/datasets/irds/tripclick_train_hofstaetter-triples)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/tripclick_train', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/tripclick_train', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}

docpairs = load_dataset('irds/tripclick_train', 'docpairs')
for record in docpairs:
    record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in πŸ€— Dataset format.

## Citation Information

```
@inproceedings{Rekabsaz2021TripClick,
  title={TripClick: The Log Files of a Large Health Web Search Engine},
  author={Navid Rekabsaz and Oleg Lesota and Markus Schedl and Jon Brassey and Carsten Eickhoff},
  year={2021},
  booktitle={SIGIR}
}
```
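The `docpairs` records pair a query id with two document ids, which is the raw material for contrastive training of rankers. The sketch below joins docpairs against query and document lookups to produce text triples; it assumes only the schemas shown in the usage snippet, treats `doc_id_a` as the preferred document of each pair, and uses toy ids and texts in place of the real TripClick data:

```python
def make_triples(docpairs, queries, docs):
    """Join docpairs with query/doc text into (query, doc_a, doc_b) triples,
    skipping pairs whose query or documents are missing from the lookups."""
    triples = []
    for pair in docpairs:
        q = queries.get(pair['query_id'])
        doc_a = docs.get(pair['doc_id_a'])
        doc_b = docs.get(pair['doc_id_b'])
        if q is not None and doc_a is not None and doc_b is not None:
            triples.append((q, doc_a, doc_b))
    return triples

# Toy lookups following the schemas above (real TripClick ids differ).
queries = {'q1': 'knee pain treatment'}
docs = {'d1': 'Exercise therapy for knee pain ...',
        'd2': 'Unrelated cardiology note ...'}
docpairs = [{'query_id': 'q1', 'doc_id_a': 'd1', 'doc_id_b': 'd2'}]
print(make_triples(docpairs, queries, docs))
```

For large-scale use, the `docs` lookup would be built by streaming the [`irds/tripclick`](https://huggingface.co/datasets/irds/tripclick) corpus rather than holding all 1.5M documents as a plain dict in memory.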
irds/tripclick_train
[ "task_categories:text-retrieval", "source_datasets:irds/tripclick", "region:us" ]
2023-01-05T03:54:07+00:00
{"source_datasets": ["irds/tripclick"], "task_categories": ["text-retrieval"], "pretty_name": "`tripclick/train`", "viewer": false}
2023-01-05T03:54:13+00:00