sha: stringlengths 40–40
text: stringlengths 0–13.4M
id: stringlengths 2–117
tags: list
created_at: stringlengths 25–25
metadata: stringlengths 2–31.7M
last_modified: stringlengths 25–25
ba4684a1a6f7d00b82a58925777269bd7ff7f2c5
# Dataset Card for Zinc20 ## Dataset Description - **Homepage:** https://zinc20.docking.org/ - **Paper:** https://pubs.acs.org/doi/10.1021/acs.jcim.0c00675 ### Dataset Summary ZINC is a publicly available database that aggregates commercially available and annotated compounds. ZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search. ZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now. This dataset includes ~1B molecules in total. We have filtered out any compounds that could not be converted from `smiles` to `selfies` representations. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits The dataset is randomly split across files into train/valid/test partitions covering roughly 80/10/10 percent of the examples. ### Source Data #### Initial Data Collection and Normalization Initial data was released at https://zinc20.docking.org/. We downloaded the data, added a `selfies` field, and filtered out all molecules that could not be converted to `selfies` representations. ### Citation Information @article{Irwin2020, doi = {10.1021/acs.jcim.0c00675}, url = {https://doi.org/10.1021/acs.jcim.0c00675}, year = {2020}, month = oct, publisher = {American Chemical Society ({ACS})}, volume = {60}, number = {12}, pages = {6065--6073}, author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle}, title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery}, journal = {Journal of Chemical Information and Modeling} } ### Contributions This dataset was curated and added by [@zanussbaum](https://github.com/zanussbaum).
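The file-level random split described above can be sketched as follows; the function and shard names are hypothetical, since the card does not give the exact splitting code:

```python
import random

def assign_split(filenames, seed=0):
    """Randomly assign files to train/validation/test in roughly 80/10/10 proportions."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    files = list(filenames)
    rng.shuffle(files)
    n = len(files)
    n_train = int(0.8 * n)
    n_valid = int(0.1 * n)
    return {
        "train": files[:n_train],
        "validation": files[n_train:n_train + n_valid],
        "test": files[n_train + n_valid:],
    }
```

Because the split is done over whole files rather than individual molecules, the example counts per split only roughly match the 80/10/10 target, as the split sizes in the dataset metadata show.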
zpn/zinc20
[ "size_categories:1B<n<10B", "license:mit", "bio", "selfies", "smiles", "small_molecules", "region:us" ]
2023-01-04T17:32:47+00:00
{"license": "mit", "size_categories": ["1B<n<10B"], "pretty_name": "zinc20", "dataset_info": {"features": [{"name": "selfies", "dtype": "string"}, {"name": "smiles", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 238295712864, "num_examples": 804925861}, {"name": "validation", "num_bytes": 26983481360, "num_examples": 100642661}, {"name": "test", "num_bytes": 29158755632, "num_examples": 101082073}], "download_size": 40061255073, "dataset_size": 294437949856}, "tags": ["bio", "selfies", "smiles", "small_molecules"]}
2023-01-06T02:03:46+00:00
48740648f60741b504ef5ea8d87a634873203479
fgomeza17/Sammy
[ "license:openrail", "region:us" ]
2023-01-04T19:48:29+00:00
{"license": "openrail"}
2023-01-04T19:49:05+00:00
ccfc48a7e02b349c04c506937c014b85945130ee
### Roboflow Dataset Page https://universe.roboflow.com/smoke-detection/smoke100-uwe4t/dataset/4 ### Dataset Labels ``` ['smoke'] ``` ### Citation ``` @misc{ smoke100-uwe4t_dataset, title = { Smoke100 Dataset }, type = { Open Source Dataset }, author = { Smoke Detection }, howpublished = { \\url{ https://universe.roboflow.com/smoke-detection/smoke100-uwe4t } }, url = { https://universe.roboflow.com/smoke-detection/smoke100-uwe4t }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { dec }, note = { visited on 2023-01-02 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.ai on March 17, 2022 at 3:42 PM GMT. It includes 21,578 images. Smoke instances are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 640x640 (Stretch) No image augmentation techniques were applied.
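Since the annotations are in COCO format, a minimal standard-library sketch of reading the bounding boxes is shown below; the function name and annotation-file path are assumptions, not part of the export:

```python
import json

def load_coco_boxes(path):
    """Map each image file name to its list of [x, y, width, height] boxes."""
    with open(path) as f:
        coco = json.load(f)
    # COCO stores images and annotations in separate lists, joined by image_id.
    file_names = {img["id"]: img["file_name"] for img in coco["images"]}
    boxes = {}
    for ann in coco["annotations"]:
        boxes.setdefault(file_names[ann["image_id"]], []).append(ann["bbox"])
    return boxes
```

Note that COCO boxes are `[x, y, width, height]` in pixels, not corner coordinates, so they may need conversion depending on the detection framework used.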
keremberke/smoke-object-detection
[ "task_categories:object-detection", "roboflow", "region:us" ]
2023-01-04T20:41:37+00:00
{"task_categories": ["object-detection"], "tags": ["roboflow"]}
2023-01-04T20:54:45+00:00
b526fa706ed98817ebf35bf66bf5c27f5174dffc
# Dataset Card for "septuagint" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
epaolinos/septuagint
[ "region:us" ]
2023-01-04T21:31:08+00:00
{"dataset_info": {"features": [{"name": "Book", "dtype": "string"}, {"name": "Chapter", "dtype": "int64"}, {"name": "Verse Number", "dtype": "int64"}, {"name": "Verse Text", "dtype": "string"}, {"name": "Genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9101054, "num_examples": 30568}], "download_size": 3421032, "dataset_size": 9101054}}
2023-01-04T21:31:19+00:00
c62f392e8aea6b2f1ea5a2c12d9bc04b48e50741
XiangPan/CIFAR10.1
[ "license:mit", "region:us" ]
2023-01-04T21:38:30+00:00
{"license": "mit"}
2023-01-04T21:38:30+00:00
e78e05770d11783d5a49429b17f2dc157730a7f3
# Dataset Card for "dreambooth-hackathon-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Amy12zz/dreambooth-hackathon-images
[ "region:us" ]
2023-01-04T22:05:18+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1047395.0, "num_examples": 4}], "download_size": 1047434, "dataset_size": 1047395.0}}
2023-01-04T22:05:25+00:00
557f1c1cbe2c59ed1542709cc1efa2c78fbf4e19
RobertLucian/avatar-10k
[ "license:gpl-3.0", "region:us" ]
2023-01-04T22:12:33+00:00
{"license": "gpl-3.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 771570808.685, "num_examples": 10689}], "download_size": 646236257, "dataset_size": 771570808.685}}
2023-01-04T22:15:52+00:00
ea57ec2a2257d517f92775b6ce1083df76837ee0
# Dataset Card for "processed_sroie_donut_dataset_json2token" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ivelin/processed_sroie_donut_dataset_json2token
[ "region:us" ]
2023-01-05T00:19:04+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 586245601.0, "num_examples": 626}], "download_size": 577293738, "dataset_size": 586245601.0}}
2023-01-05T00:19:38+00:00
b19eec145643c02045f384962d69ab4cb98ed6fb
# Dataset Card for "processed_sroie_donut_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ivelin/processed_sroie_donut_dataset
[ "region:us" ]
2023-01-05T00:28:11+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9243064809, "num_examples": 626}], "download_size": 919646545, "dataset_size": 9243064809}}
2023-01-05T01:01:56+00:00
eda2739be485cc048f79950aa94fa84a62ed4d61
# Dataset Card for "processed_sroie_donut_dataset_train_test_split" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ivelin/processed_sroie_donut_dataset_train_test_split
[ "region:us" ]
2023-01-05T00:36:08+00:00
{"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8312852216.400958, "num_examples": 563}, {"name": "test", "num_bytes": 930212592.5990416, "num_examples": 63}], "download_size": 919833989, "dataset_size": 9243064809.0}}
2023-01-05T01:05:53+00:00
4f585f6a98085f8b05ef4df964f6e93de1ced0c8
Images
ariciano/images
[ "region:us" ]
2023-01-05T00:51:29+00:00
{}
2023-01-05T01:06:49+00:00
d7e75f3b96184f9ecce271d2ea93fedc58164da7
Amala/bill_new
[ "license:unknown", "region:us" ]
2023-01-05T01:40:57+00:00
{"license": "unknown"}
2023-01-05T06:23:17+00:00
e2824b6afcb102d19833d33712b1b6d56c712a9e
# Dataset Card for `antique` The `antique` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=403,666 This dataset is used by: [`antique_test`](https://huggingface.co/datasets/irds/antique_test), [`antique_test_non-offensive`](https://huggingface.co/datasets/irds/antique_test_non-offensive), [`antique_train`](https://huggingface.co/datasets/irds/antique_train), [`antique_train_split200-train`](https://huggingface.co/datasets/irds/antique_train_split200-train), [`antique_train_split200-valid`](https://huggingface.co/datasets/irds/antique_train_split200-valid) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/antique', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T01:47:04+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`antique`", "viewer": false}
2023-01-05T02:43:08+00:00
62a770a02ce76920e093391a25806e14cd3bfd82
# Dataset Card for `antique/test` The `antique/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test). # Data This dataset provides: - `queries` (i.e., topics); count=200 - `qrels`: (relevance assessments); count=6,589 - For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/antique_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/antique_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique_test
[ "task_categories:text-retrieval", "source_datasets:irds/antique", "region:us" ]
2023-01-05T02:18:42+00:00
{"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/test`", "viewer": false}
2023-01-05T02:43:12+00:00
25fec169c670281089ac223682bd521eb0f005fe
# Dataset Card for `antique/test/non-offensive` The `antique/test/non-offensive` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test/non-offensive). # Data This dataset provides: - `queries` (i.e., topics); count=176 - `qrels`: (relevance assessments); count=5,752 - For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/antique_test_non-offensive', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/antique_test_non-offensive', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique_test_non-offensive
[ "task_categories:text-retrieval", "source_datasets:irds/antique", "region:us" ]
2023-01-05T02:18:53+00:00
{"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/test/non-offensive`", "viewer": false}
2023-01-05T02:43:17+00:00
1d244f247e15d199b38ee5b410a63958809fbd02
# Dataset Card for `antique/train` The `antique/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train). # Data This dataset provides: - `queries` (i.e., topics); count=2,426 - `qrels`: (relevance assessments); count=27,422 - For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/antique_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/antique_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique_train
[ "task_categories:text-retrieval", "source_datasets:irds/antique", "region:us" ]
2023-01-05T02:19:05+00:00
{"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train`", "viewer": false}
2023-01-05T02:43:21+00:00
dee4e966f8d8f0de94b7d7b627d6a1b83bc5aeec
# Dataset Card for `antique/train/split200-train` The `antique/train/split200-train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-train). # Data This dataset provides: - `queries` (i.e., topics); count=2,226 - `qrels`: (relevance assessments); count=25,229 - For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/antique_train_split200-train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/antique_train_split200-train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique_train_split200-train
[ "task_categories:text-retrieval", "source_datasets:irds/antique", "region:us" ]
2023-01-05T02:19:16+00:00
{"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train/split200-train`", "viewer": false}
2023-01-05T02:43:26+00:00
34cb5fbfb733863d08bc185708ce45b66cc3f088
# Dataset Card for `antique/train/split200-valid` The `antique/train/split200-valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-valid). # Data This dataset provides: - `queries` (i.e., topics); count=200 - `qrels`: (relevance assessments); count=2,193 - For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/antique_train_split200-valid', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/antique_train_split200-valid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hashemi2020Antique, title={ANTIQUE: A Non-Factoid Question Answering Benchmark}, author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft}, booktitle={ECIR}, year={2020} } ```
irds/antique_train_split200-valid
[ "task_categories:text-retrieval", "source_datasets:irds/antique", "region:us" ]
2023-01-05T02:19:27+00:00
{"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train/split200-valid`", "viewer": false}
2023-01-05T02:43:31+00:00
68fc2bf4d093ac0f849236e0c32df90df2489a39
# Dataset Card for `aquaint` The `aquaint` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,033,461 This dataset is used by: [`aquaint_trec-robust-2005`](https://huggingface.co/datasets/irds/aquaint_trec-robust-2005) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/aquaint', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @misc{Graff2002Aquaint, title={The AQUAINT Corpus of English News Text}, author={David Graff}, year={2002}, url={https://catalog.ldc.upenn.edu/LDC2002T31}, publisher={Linguistic Data Consortium} } ```
irds/aquaint
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:19:38+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`aquaint`", "viewer": false}
2023-01-05T02:44:06+00:00
0156f662bec09957647bddc5faf1f63170f912ab
# Dataset Card for `aquaint/trec-robust-2005` The `aquaint/trec-robust-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint/trec-robust-2005). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=37,798 - For `docs`, use [`irds/aquaint`](https://huggingface.co/datasets/irds/aquaint) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/aquaint_trec-robust-2005', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/aquaint_trec-robust-2005', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Voorhees2005Robust, title={Overview of the TREC 2005 Robust Retrieval Track}, author={Ellen M. Voorhees}, booktitle={TREC}, year={2005} } @misc{Graff2002Aquaint, title={The AQUAINT Corpus of English News Text}, author={David Graff}, year={2002}, url={https://catalog.ldc.upenn.edu/LDC2002T31}, publisher={Linguistic Data Consortium} } ```
irds/aquaint_trec-robust-2005
[ "task_categories:text-retrieval", "source_datasets:irds/aquaint", "region:us" ]
2023-01-05T02:19:49+00:00
{"source_datasets": ["irds/aquaint"], "task_categories": ["text-retrieval"], "pretty_name": "`aquaint/trec-robust-2005`", "viewer": false}
2023-01-05T02:44:10+00:00
e8c6115186533dab575310ae5bd22e45246183a0
# Dataset Card for `beir/arguana` The `beir/arguana` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/arguana). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=8,674 - `queries` (i.e., topics); count=1,406 - `qrels`: (relevance assessments); count=1,406 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_arguana', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_arguana', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_arguana', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wachsmuth2018Arguana, author = "Wachsmuth, Henning and Syed, Shahbaz and Stein, Benno", title = "Retrieval of the Best Counterargument without Prior Topic Knowledge", booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", year = "2018", publisher = "Association for Computational Linguistics", location = "Melbourne, Australia", pages = "241--251", url = "http://aclweb.org/anthology/P18-1023" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_arguana
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:20:01+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/arguana`", "viewer": false}
2023-01-05T02:44:15+00:00
1003a6b347a616074e510c56b5efb92c2a5003d8
# Dataset Card for `beir/climate-fever` The `beir/climate-fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/climate-fever). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,416,593 - `queries` (i.e., topics); count=1,535 - `qrels`: (relevance assessments); count=4,681 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_climate-fever', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_climate-fever', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_climate-fever', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Diggelmann2020CLIMATEFEVERAD, title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims}, author={T. Diggelmann and Jordan L. Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold}, journal={ArXiv}, year={2020}, volume={abs/2012.00614} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_climate-fever
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:20:12+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/climate-fever`", "viewer": false}
2023-01-05T02:44:20+00:00
17727173215cb2034ee1943eea2cd6125b88f7f6
# Dataset Card for `beir/dbpedia-entity` The `beir/dbpedia-entity` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=4,635,922 - `queries` (i.e., topics); count=467 This dataset is used by: [`beir_dbpedia-entity_dev`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_dev), [`beir_dbpedia-entity_test`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_test) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_dbpedia-entity', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...} queries = load_dataset('irds/beir_dbpedia-entity', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Hasibi2017DBpediaEntityVA, title={DBpedia-Entity v2: A Test Collection for Entity Search}, author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan}, journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2017} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_dbpedia-entity
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:20:23+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity`", "viewer": false}
2023-01-05T02:44:24+00:00
4f7dd1e62b688e00d665ddee9aef5935eb7d8568
# Dataset Card for `beir/dbpedia-entity/dev` The `beir/dbpedia-entity/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/dev). # Data This dataset provides: - `queries` (i.e., topics); count=67 - `qrels`: (relevance assessments); count=5,673 - For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_dbpedia-entity_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_dbpedia-entity_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Hasibi2017DBpediaEntityVA, title={DBpedia-Entity v2: A Test Collection for Entity Search}, author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan}, journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2017} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_dbpedia-entity_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_dbpedia-entity", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:20:34+00:00
{"source_datasets": ["irds/beir_dbpedia-entity"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity/dev`", "viewer": false}
2023-01-05T02:44:29+00:00
46e0094ded9f08ae0454de048c60f70ddf77eb52
# Dataset Card for `beir/dbpedia-entity/test` The `beir/dbpedia-entity/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/test). # Data This dataset provides: - `queries` (i.e., topics); count=400 - `qrels`: (relevance assessments); count=43,515 - For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_dbpedia-entity_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_dbpedia-entity_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Hasibi2017DBpediaEntityVA, title={DBpedia-Entity v2: A Test Collection for Entity Search}, author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan}, journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval}, year={2017} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_dbpedia-entity_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_dbpedia-entity", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:44:34+00:00
{"source_datasets": ["irds/beir_dbpedia-entity"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity/test`", "viewer": false}
2023-01-05T02:44:40+00:00
be5b8c519e4a654fcb4061f99585f37e4bd650e6
# Dataset Card for `beir/fever`

The `beir/fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=5,416,568
 - `queries` (i.e., topics); count=123,142

This dataset is used by: [`beir_fever_dev`](https://huggingface.co/datasets/irds/beir_fever_dev), [`beir_fever_test`](https://huggingface.co/datasets/irds/beir_fever_test), [`beir_fever_train`](https://huggingface.co/datasets/irds/beir_fever_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/beir_fever', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ..., 'title': ...}

queries = load_dataset('irds/beir_fever', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Thorne2018Fever,
  title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
  author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/N18-1074",
  doi = "10.18653/v1/N18-1074",
  pages = "809--819"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
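The split datasets below carry only queries and qrels; their `doc_id`s resolve against this corpus. A minimal sketch of building a `doc_id` → record lookup, shown on illustrative in-memory records with the same fields as the `docs` configuration (for the full 5.4M-document corpus you would usually stream rather than materialize a dict like this):

```python
# Illustrative docs records matching the 'docs' schema
# ('doc_id', 'text', 'title'); contents are made up for demonstration.
docs_records = [
    {'doc_id': 'Fox_Broadcasting_Company', 'title': 'Fox Broadcasting Company',
     'text': 'The Fox Broadcasting Company is an American television network.'},
    {'doc_id': 'Telemundo', 'title': 'Telemundo',
     'text': 'Telemundo is an American Spanish-language television network.'},
]

# Build an id -> record lookup so qrels doc_ids can be resolved to text.
doc_store = {rec['doc_id']: rec for rec in docs_records}

def resolve(doc_id):
    """Return the document text for a judged doc_id, or None if missing."""
    rec = doc_store.get(doc_id)
    return rec['text'] if rec else None
```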
irds/beir_fever
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:44:45+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever`", "viewer": false}
2023-01-05T02:44:51+00:00
85f3a003f2483bb16a920a1e797c285ef3c2dde3
# Dataset Card for `beir/fever/dev`

The `beir/fever/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=6,666
 - `qrels`: (relevance assessments); count=8,079
 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fever_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fever_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Thorne2018Fever,
  title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
  author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/N18-1074",
  doi = "10.18653/v1/N18-1074",
  pages = "809--819"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fever_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:44:56+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/dev`", "viewer": false}
2023-01-05T02:45:02+00:00
615f7819eb0498ad701ec109b14d001a2e3c2830
# Dataset Card for `beir/fever/test`

The `beir/fever/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/test).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=6,666
 - `qrels`: (relevance assessments); count=7,937
 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fever_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fever_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Thorne2018Fever,
  title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
  author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/N18-1074",
  doi = "10.18653/v1/N18-1074",
  pages = "809--819"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fever_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:07+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/test`", "viewer": false}
2023-01-05T02:45:13+00:00
a09ed8ea7edeb86eb6c29f4d10858e6a515b8b9d
# Dataset Card for `beir/fever/train`

The `beir/fever/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=109,810
 - `qrels`: (relevance assessments); count=140,085
 - For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fever_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fever_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Thorne2018Fever,
  title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
  author = "Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/N18-1074",
  doi = "10.18653/v1/N18-1074",
  pages = "809--819"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fever_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fever", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:19+00:00
{"source_datasets": ["irds/beir_fever"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever/train`", "viewer": false}
2023-01-05T02:45:24+00:00
5b07156fcd5f44189a2a7b7f638bac632893d073
# Dataset Card for `beir/fiqa`

The `beir/fiqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=57,638
 - `queries` (i.e., topics); count=6,648

This dataset is used by: [`beir_fiqa_dev`](https://huggingface.co/datasets/irds/beir_fiqa_dev), [`beir_fiqa_test`](https://huggingface.co/datasets/irds/beir_fiqa_test), [`beir_fiqa_train`](https://huggingface.co/datasets/irds/beir_fiqa_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/beir_fiqa', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/beir_fiqa', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Maia2018Fiqa,
  title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
  author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
  journal={Companion Proceedings of the The Web Conference 2018},
  year={2018}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fiqa
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa`", "viewer": false}
2023-01-05T02:45:35+00:00
999e0e31ad15ebb0511133c8b9eb1e11f7193984
# Dataset Card for `beir/fiqa/dev`

The `beir/fiqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=500
 - `qrels`: (relevance assessments); count=1,238
 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fiqa_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fiqa_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Maia2018Fiqa,
  title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
  author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
  journal={Companion Proceedings of the The Web Conference 2018},
  year={2018}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fiqa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:41+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/dev`", "viewer": false}
2023-01-05T02:45:47+00:00
a9ea84138dde55ae1149087a2b70d9d1e3ea06ae
# Dataset Card for `beir/fiqa/test`

The `beir/fiqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/test).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=648
 - `qrels`: (relevance assessments); count=1,706
 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fiqa_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fiqa_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Maia2018Fiqa,
  title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
  author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
  journal={Companion Proceedings of the The Web Conference 2018},
  year={2018}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fiqa_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:45:52+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/test`", "viewer": false}
2023-01-05T02:45:58+00:00
a7fe82fcfc5e2397b7468ade5a236400d1e29f5d
# Dataset Card for `beir/fiqa/train`

The `beir/fiqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=5,500
 - `qrels`: (relevance assessments); count=14,166
 - For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_fiqa_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_fiqa_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@article{Maia2018Fiqa,
  title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
  author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
  journal={Companion Proceedings of the The Web Conference 2018},
  year={2018}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_fiqa_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_fiqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:03+00:00
{"source_datasets": ["irds/beir_fiqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fiqa/train`", "viewer": false}
2023-01-05T02:46:09+00:00
a9d4569fcf0f719c13a00e7d829da989c180c858
# Dataset Card for `beir/hotpotqa`

The `beir/hotpotqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=5,233,329
 - `queries` (i.e., topics); count=97,852

This dataset is used by: [`beir_hotpotqa_dev`](https://huggingface.co/datasets/irds/beir_hotpotqa_dev), [`beir_hotpotqa_test`](https://huggingface.co/datasets/irds/beir_hotpotqa_test), [`beir_hotpotqa_train`](https://huggingface.co/datasets/irds/beir_hotpotqa_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/beir_hotpotqa', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...}

queries = load_dataset('irds/beir_hotpotqa', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Yang2018Hotpotqa,
  title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
  author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  month = oct # "-" # nov,
  year = "2018",
  address = "Brussels, Belgium",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/D18-1259",
  doi = "10.18653/v1/D18-1259",
  pages = "2369--2380"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_hotpotqa
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:14+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa`", "viewer": false}
2023-01-05T02:46:20+00:00
843a773f0c01515354a6e4ed92b808fbf0ce5816
# Dataset Card for `beir/hotpotqa/dev`

The `beir/hotpotqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=5,447
 - `qrels`: (relevance assessments); count=10,894
 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_hotpotqa_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_hotpotqa_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Yang2018Hotpotqa,
  title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
  author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  month = oct # "-" # nov,
  year = "2018",
  address = "Brussels, Belgium",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/D18-1259",
  doi = "10.18653/v1/D18-1259",
  pages = "2369--2380"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_hotpotqa_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:25+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/dev`", "viewer": false}
2023-01-05T02:46:31+00:00
57bdab89c981562ccfff221830c2a788af40222e
# Dataset Card for `beir/hotpotqa/test`

The `beir/hotpotqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/test).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=7,405
 - `qrels`: (relevance assessments); count=14,810
 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_hotpotqa_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_hotpotqa_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Yang2018Hotpotqa,
  title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
  author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  month = oct # "-" # nov,
  year = "2018",
  address = "Brussels, Belgium",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/D18-1259",
  doi = "10.18653/v1/D18-1259",
  pages = "2369--2380"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_hotpotqa_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:37+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/test`", "viewer": false}
2023-01-05T02:46:42+00:00
cb373247cc67a93292fd84e958f74263c982c7ce
# Dataset Card for `beir/hotpotqa/train`

The `beir/hotpotqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=85,000
 - `qrels`: (relevance assessments); count=170,000
 - For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_hotpotqa_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_hotpotqa_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Yang2018Hotpotqa,
  title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
  author = "Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William and Salakhutdinov, Ruslan and Manning, Christopher D.",
  booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
  month = oct # "-" # nov,
  year = "2018",
  address = "Brussels, Belgium",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/D18-1259",
  doi = "10.18653/v1/D18-1259",
  pages = "2369--2380"
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_hotpotqa_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_hotpotqa", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:48+00:00
{"source_datasets": ["irds/beir_hotpotqa"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/hotpotqa/train`", "viewer": false}
2023-01-05T02:46:53+00:00
7ac33c87e5a52e34ac4de6b2fbc9a6be0a332d70
# Dataset Card for `beir/msmarco`

The `beir/msmarco` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=8,841,823
 - `queries` (i.e., topics); count=509,962

This dataset is used by: [`beir_msmarco_dev`](https://huggingface.co/datasets/irds/beir_msmarco_dev), [`beir_msmarco_test`](https://huggingface.co/datasets/irds/beir_msmarco_test), [`beir_msmarco_train`](https://huggingface.co/datasets/irds/beir_msmarco_train)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/beir_msmarco', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}

queries = load_dataset('irds/beir_msmarco', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_msmarco
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:46:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco`", "viewer": false}
2023-01-05T02:47:04+00:00
a2542a6abd16cb12e686638f8378210b862e9e02
# Dataset Card for `beir/msmarco/dev`

The `beir/msmarco/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/dev).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=6,980
 - `qrels`: (relevance assessments); count=7,437
 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_msmarco_dev', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_msmarco_dev', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_msmarco_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:10+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/dev`", "viewer": false}
2023-01-05T02:47:16+00:00
db8b6c11d729fe1de71e4ed49b8c6eb7b95202e6
# Dataset Card for `beir/msmarco/test`

The `beir/msmarco/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/test).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=43
 - `qrels`: (relevance assessments); count=9,260
 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_msmarco_test', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_msmarco_test', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Craswell2019TrecDl,
  title={Overview of the TREC 2019 deep learning track},
  author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
  booktitle={TREC 2019},
  year={2019}
}
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
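Unlike the train/dev splits, this split's judgments come from the TREC 2019 Deep Learning track and are densely graded (9,260 judgments over just 43 queries, roughly 215 per query). A sketch of summarizing such graded qrels, using illustrative in-memory records with the same fields as the `qrels` configuration (all IDs and grades below are made up for demonstration):

```python
from collections import Counter

# Illustrative TREC-DL-style graded qrels records; values are hypothetical.
qrels_records = [
    {'query_id': '19335', 'doc_id': '1017759', 'relevance': 0, 'iteration': '0'},
    {'query_id': '19335', 'doc_id': '1082489', 'relevance': 3, 'iteration': '0'},
    {'query_id': '47923', 'doc_id': '1302632', 'relevance': 2, 'iteration': '0'},
    {'query_id': '47923', 'doc_id': '201376',  'relevance': 1, 'iteration': '0'},
]

# Distribution of relevance grades across all judgments.
grade_counts = Counter(rec['relevance'] for rec in qrels_records)

# Judged depth per query (how many documents were assessed per topic).
per_query = Counter(rec['query_id'] for rec in qrels_records)
```

For graded judgments like these, metrics such as nDCG use the grade directly, whereas binary metrics typically threshold at a minimum grade (e.g. relevance ≥ 2 for the TREC DL passage task).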
irds/beir_msmarco_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:21+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/test`", "viewer": false}
2023-01-05T02:47:27+00:00
55ed347565ae2664c3163709e13a1b310bd6c437
# Dataset Card for `beir/msmarco/train`

The `beir/msmarco/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/train).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=502,939
 - `qrels`: (relevance assessments); count=532,751
 - For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/beir_msmarco_train', 'queries')
for record in queries:
    record  # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_msmarco_train', 'qrels')
for record in qrels:
    record  # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Bajaj2016Msmarco,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang},
  booktitle={InCoCo@NIPS},
  year={2016}
}
@article{Thakur2021Beir,
  title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
  author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
  journal = "arXiv preprint arXiv:2104.08663",
  month = "4",
  year = "2021",
  url = "https://arxiv.org/abs/2104.08663",
}
```
irds/beir_msmarco_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_msmarco", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:32+00:00
{"source_datasets": ["irds/beir_msmarco"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/msmarco/train`", "viewer": false}
2023-01-05T02:47:38+00:00
c28514f2fa050fd2960a90289ed03de27cc58774
# Dataset Card for `beir/nfcorpus` The `beir/nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,633 - `queries` (i.e., topics); count=3,237 This dataset is used by: [`beir_nfcorpus_dev`](https://huggingface.co/datasets/irds/beir_nfcorpus_dev), [`beir_nfcorpus_test`](https://huggingface.co/datasets/irds/beir_nfcorpus_test), [`beir_nfcorpus_train`](https://huggingface.co/datasets/irds/beir_nfcorpus_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_nfcorpus', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...} queries = load_dataset('irds/beir_nfcorpus', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'url': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus`", "viewer": false}
2023-01-05T02:47:49+00:00
0480b338f240acd48ca982972179d794e4f013ba
# Dataset Card for `beir/nfcorpus/dev` The `beir/nfcorpus/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/dev). # Data This dataset provides: - `queries` (i.e., topics); count=324 - `qrels`: (relevance assessments); count=11,385 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:47:54+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/dev`", "viewer": false}
2023-01-05T02:48:00+00:00
a81c6f673c961398e0885421d30a490005764c3c
# Dataset Card for `beir/nfcorpus/test` The `beir/nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/test). # Data This dataset provides: - `queries` (i.e., topics); count=323 - `qrels`: (relevance assessments); count=12,334 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:05+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/test`", "viewer": false}
2023-01-05T02:48:11+00:00
a68af490b17ac326021022490730a55d72f4d7bc
# Dataset Card for `beir/nfcorpus/train` The `beir/nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/train). # Data This dataset provides: - `queries` (i.e., topics); count=2,590 - `qrels`: (relevance assessments); count=110,575 - For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_nfcorpus_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nfcorpus_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Boteva2016Nfcorpus, title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval", author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler", booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})", location = "Padova, Italy", publisher = "Springer", year = 2016 } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nfcorpus_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_nfcorpus", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:17+00:00
{"source_datasets": ["irds/beir_nfcorpus"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nfcorpus/train`", "viewer": false}
2023-01-05T02:48:22+00:00
01d9f2ee90a78404bd6886a8554f4a3d3348cb1a
# Dataset Card for `beir/nq` The `beir/nq` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nq). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=2,681,468 - `queries` (i.e., topics); count=3,452 - `qrels`: (relevance assessments); count=4,201 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_nq', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_nq', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_nq', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Kwiatkowski2019Nq, title = {Natural Questions: a Benchmark for Question Answering Research}, author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov}, year = {2019}, journal = {TACL} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_nq
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/nq`", "viewer": false}
2023-01-05T02:48:33+00:00
f5f4a676494bac329c88d060c18c492e2b68808b
# Dataset Card for `beir/quora` The `beir/quora` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=522,931 - `queries` (i.e., topics); count=15,000 This dataset is used by: [`beir_quora_dev`](https://huggingface.co/datasets/irds/beir_quora_dev), [`beir_quora_test`](https://huggingface.co/datasets/irds/beir_quora_test) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_quora', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/beir_quora', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora`", "viewer": false}
2023-01-05T02:48:44+00:00
669cd6363d77000b48fe48cee3f6379b0688509c
# Dataset Card for `beir/quora/dev` The `beir/quora/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/dev). # Data This dataset provides: - `queries` (i.e., topics); count=5,000 - `qrels`: (relevance assessments); count=7,626 - For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_quora_dev', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_quora_dev', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora_dev
[ "task_categories:text-retrieval", "source_datasets:irds/beir_quora", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:48:50+00:00
{"source_datasets": ["irds/beir_quora"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora/dev`", "viewer": false}
2023-01-05T02:48:56+00:00
15990d64a421a576b8a8feb20d1032667f4e69e3
# Dataset Card for `beir/quora/test` The `beir/quora/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/test). # Data This dataset provides: - `queries` (i.e., topics); count=10,000 - `qrels`: (relevance assessments); count=15,675 - For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_quora_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_quora_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_quora_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_quora", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:01+00:00
{"source_datasets": ["irds/beir_quora"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/quora/test`", "viewer": false}
2023-01-05T02:49:07+00:00
ea1662eca271f598ab4682fb594cca266c5e3fec
# Dataset Card for `beir/scifact` The `beir/scifact` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=5,183 - `queries` (i.e., topics); count=1,109 This dataset is used by: [`beir_scifact_test`](https://huggingface.co/datasets/irds/beir_scifact_test), [`beir_scifact_train`](https://huggingface.co/datasets/irds/beir_scifact_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_scifact', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ...} queries = load_dataset('irds/beir_scifact', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:12+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact`", "viewer": false}
2023-01-05T02:49:18+00:00
d1088c90e1b9c36d4d39175293c66afe76687927
# Dataset Card for `beir/scifact/test` The `beir/scifact/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/test). # Data This dataset provides: - `queries` (i.e., topics); count=300 - `qrels`: (relevance assessments); count=339 - For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_scifact_test', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_scifact_test', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact_test
[ "task_categories:text-retrieval", "source_datasets:irds/beir_scifact", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:23+00:00
{"source_datasets": ["irds/beir_scifact"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact/test`", "viewer": false}
2023-01-05T02:49:29+00:00
992069af405943938783272ba56318c11ecf547f
# Dataset Card for `beir/scifact/train` The `beir/scifact/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/train). # Data This dataset provides: - `queries` (i.e., topics); count=809 - `qrels`: (relevance assessments); count=919 - For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/beir_scifact_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/beir_scifact_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Wadden2020Scifact, title = "Fact or Fiction: Verifying Scientific Claims", author = "Wadden, David and Lin, Shanchuan and Lo, Kyle and Wang, Lucy Lu and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.609", doi = "10.18653/v1/2020.emnlp-main.609", pages = "7534--7550" } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_scifact_train
[ "task_categories:text-retrieval", "source_datasets:irds/beir_scifact", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:35+00:00
{"source_datasets": ["irds/beir_scifact"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/scifact/train`", "viewer": false}
2023-01-05T02:49:40+00:00
38c95aff68694ebdaaca5268c37fe8c0c4af76ff
# Dataset Card for `beir/trec-covid` The `beir/trec-covid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/trec-covid). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=171,332 - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=66,336 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_trec-covid', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ..., 'pubmed_id': ...} queries = load_dataset('irds/beir_trec-covid', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'query': ..., 'narrative': ...} qrels = load_dataset('irds/beir_trec-covid', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Wang2020Cord19, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } @article{Voorhees2020TrecCovid, title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection}, author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang}, journal={ArXiv}, year={2020}, volume={abs/2005.04474} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_trec-covid
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:46+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/trec-covid`", "viewer": false}
2023-01-05T02:49:51+00:00
965174667841438e58ccfed2e8760ee4ec0aabd3
# Dataset Card for `beir/webis-touche2020` The `beir/webis-touche2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=382,545 - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,962 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_webis-touche2020', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...} queries = load_dataset('irds/beir_webis-touche2020', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/beir_webis-touche2020', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Tuche, title={Overview of Touch{\'e} 2020: Argument Retrieval}, author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle={CLEF}, year={2020} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_webis-touche2020
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:49:57+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/webis-touche2020`", "viewer": false}
2023-01-05T02:50:02+00:00
285d3592952490f2051d0c9f56b9eca74746ec7d
# Dataset Card for `beir/webis-touche2020/v2` The `beir/webis-touche2020/v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020/v2). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=382,545 - `queries` (i.e., topics); count=49 - `qrels`: (relevance assessments); count=2,214 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/beir_webis-touche2020_v2', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...} queries = load_dataset('irds/beir_webis-touche2020_v2', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...} qrels = load_dataset('irds/beir_webis-touche2020_v2', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Bondarenko2020Tuche, title={Overview of Touch{\'e} 2020: Argument Retrieval}, author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen}, booktitle={CLEF}, year={2020} } @article{Thakur2021Beir, title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna", journal= "arXiv preprint arXiv:2104.08663", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.08663", } ```
irds/beir_webis-touche2020_v2
[ "task_categories:text-retrieval", "arxiv:2104.08663", "region:us" ]
2023-01-05T02:50:08+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/webis-touche2020/v2`", "viewer": false}
2023-01-05T02:50:14+00:00
b9e4834c6e4a249fa558055e85ccecd578f92681
# Dataset Card for `c4/en-noclean-tr` The `c4/en-noclean-tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,063,805,381 This dataset is used by: [`c4_en-noclean-tr_trec-misinfo-2021`](https://huggingface.co/datasets/irds/c4_en-noclean-tr_trec-misinfo-2021) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/c4_en-noclean-tr', 'docs') for record in docs: record # {'doc_id': ..., 'text': ..., 'url': ..., 'timestamp': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/c4_en-noclean-tr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:50:19+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`c4/en-noclean-tr`", "viewer": false}
2023-01-05T02:50:25+00:00
7953f361b9c65497915f8d74b8dc41aa70c628f9
# Dataset Card for `c4/en-noclean-tr/trec-misinfo-2021` The `c4/en-noclean-tr/trec-misinfo-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr/trec-misinfo-2021). # Data This dataset provides: - `queries` (i.e., topics); count=50 - For `docs`, use [`irds/c4_en-noclean-tr`](https://huggingface.co/datasets/irds/c4_en-noclean-tr) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/c4_en-noclean-tr_trec-misinfo-2021', 'queries') for record in queries: record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ..., 'disclaimer': ..., 'stance': ..., 'evidence': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/c4_en-noclean-tr_trec-misinfo-2021
[ "task_categories:text-retrieval", "source_datasets:irds/c4_en-noclean-tr", "region:us" ]
2023-01-05T02:50:30+00:00
{"source_datasets": ["irds/c4_en-noclean-tr"], "task_categories": ["text-retrieval"], "pretty_name": "`c4/en-noclean-tr/trec-misinfo-2021`", "viewer": false}
2023-01-05T02:50:36+00:00
7bab217764d968c16a26944abb7cc44985413e83
# Dataset Card for `car/v1.5` The `car/v1.5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=29,678,367 This dataset is used by: [`car_v1.5_trec-y1_auto`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_auto), [`car_v1.5_trec-y1_manual`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_manual) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/car_v1.5', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v1.5
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:50:41+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5`", "viewer": false}
2023-01-05T02:50:47+00:00
7cd52fe0fd77dee77f590679270efc5e7f318772
# Dataset Card for `car/v1.5/trec-y1/auto` The `car/v1.5/trec-y1/auto` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/auto). # Data This dataset provides: - `qrels`: (relevance assessments); count=5,820 - For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/car_v1.5_trec-y1_auto', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} } @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v1.5_trec-y1_auto
[ "task_categories:text-retrieval", "source_datasets:irds/car_v1.5", "region:us" ]
2023-01-05T02:50:52+00:00
{"source_datasets": ["irds/car_v1.5"], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5/trec-y1/auto`", "viewer": false}
2023-01-05T02:50:58+00:00
2daa98372a4412614902f0b178cf68cbe62b016b
# Dataset Card for `car/v1.5/trec-y1/manual` The `car/v1.5/trec-y1/manual` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/manual). # Data This dataset provides: - `qrels`: (relevance assessments); count=29,571 - For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5) ## Usage ```python from datasets import load_dataset qrels = load_dataset('irds/car_v1.5_trec-y1_manual', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Dietz2017TrecCar, title={TREC Complex Answer Retrieval Overview.}, author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick}, booktitle={TREC}, year={2017} } @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v1.5_trec-y1_manual
[ "task_categories:text-retrieval", "source_datasets:irds/car_v1.5", "region:us" ]
2023-01-05T02:51:03+00:00
{"source_datasets": ["irds/car_v1.5"], "task_categories": ["text-retrieval"], "pretty_name": "`car/v1.5/trec-y1/manual`", "viewer": false}
2023-01-05T02:51:09+00:00
7c27b92a62438e66c3f826b0f4c47feda87273e1
# Dataset Card for `car/v2.0` The `car/v2.0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v2.0). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=29,794,697 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/car_v2.0', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Dietz2017Car, title={{TREC CAR}: A Data Set for Complex Answer Retrieval}, author={Laura Dietz and Ben Gamari}, year={2017}, note={Version 1.5}, url={http://trec-car.cs.unh.edu} } ```
irds/car_v2.0
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:15+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`car/v2.0`", "viewer": false}
2023-01-05T02:51:21+00:00
f45cb0e6a107cf67791746772c5dfee581277137
# Dataset Card for `highwire/trec-genomics-2006` The `highwire/trec-genomics-2006` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2006). # Data This dataset provides: - `queries` (i.e., topics); count=28 - `qrels`: (relevance assessments); count=27,999 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/highwire_trec-genomics-2006', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/highwire_trec-genomics-2006', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2006TrecGenomics, title={TREC 2006 Genomics Track Overview}, author={William Hersh and Aaron M. Cohen and Phoebe Roberts and Hari Krishna Rekapalli}, booktitle={TREC}, year={2006} } ```
irds/highwire_trec-genomics-2006
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:26+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`highwire/trec-genomics-2006`", "viewer": false}
2023-01-05T02:51:32+00:00
b7fa4a63416f551e005def97161bbeb1650a0d7a
# Dataset Card for `highwire/trec-genomics-2007` The `highwire/trec-genomics-2007` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2007). # Data This dataset provides: - `queries` (i.e., topics); count=36 - `qrels`: (relevance assessments); count=35,996 ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/highwire_trec-genomics-2007', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/highwire_trec-genomics-2007', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2007TrecGenomics, title={TREC 2007 Genomics Track Overview}, author={William Hersh and Aaron Cohen and Lynn Ruslen and Phoebe Roberts}, booktitle={TREC}, year={2007} } ```
irds/highwire_trec-genomics-2007
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`highwire/trec-genomics-2007`", "viewer": false}
2023-01-05T02:51:43+00:00
2e188da49f6876634790bef7ec0bf9bae554e704
# Dataset Card for `medline/2004` The `medline/2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=3,672,808 This dataset is used by: [`medline_2004_trec-genomics-2004`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2004), [`medline_2004_trec-genomics-2005`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2005) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/medline_2004', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/medline_2004
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:51:48+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004`", "viewer": false}
2023-01-05T02:51:54+00:00
65b12b3859a0e689d5e74e989f11b8d3abc43b8b
# Dataset Card for `medline/2004/trec-genomics-2004` The `medline/2004/trec-genomics-2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2004). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=8,268 - For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2004_trec-genomics-2004', 'queries') for record in queries: record # {'query_id': ..., 'title': ..., 'need': ..., 'context': ...} qrels = load_dataset('irds/medline_2004_trec-genomics-2004', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2004TrecGenomics, title={TREC 2004 Genomics Track Overview}, author={William R. Hersh and Ravi Teja Bhuptiraju and Laura Ross and Phoebe Johnson and Aaron M. Cohen and Dale F. Kraemer}, booktitle={TREC}, year={2004} } ```
irds/medline_2004_trec-genomics-2004
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2004", "region:us" ]
2023-01-05T02:52:00+00:00
{"source_datasets": ["irds/medline_2004"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004/trec-genomics-2004`", "viewer": false}
2023-01-05T02:52:05+00:00
199f8f87b8d80b9728773f1fef2020d77b7a8dfb
# Dataset Card for `medline/2004/trec-genomics-2005` The `medline/2004/trec-genomics-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2005). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=39,958 - For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2004_trec-genomics-2005', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/medline_2004_trec-genomics-2005', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Hersh2005TrecGenomics, title={TREC 2005 Genomics Track Overview}, author={William Hersh and Aaron Cohen and Jianji Yang and Ravi Teja Bhupatiraju and Phoebe Roberts and Marti Hearst}, booktitle={TREC}, year={2007} } ```
irds/medline_2004_trec-genomics-2005
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2004", "region:us" ]
2023-01-05T02:52:11+00:00
{"source_datasets": ["irds/medline_2004"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2004/trec-genomics-2005`", "viewer": false}
2023-01-05T02:52:16+00:00
e078b41418b36287b42d57a58a768294482516ea
# Dataset Card for `medline/2017` The `medline/2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=26,740,025 This dataset is used by: [`medline_2017_trec-pm-2017`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2017), [`medline_2017_trec-pm-2018`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2018) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/medline_2017', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'abstract': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/medline_2017
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:52:22+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017`", "viewer": false}
2023-01-05T02:52:28+00:00
d39c00d1f99bfe13b9469976c0413e980f2461a7
# Dataset Card for `medline/2017/trec-pm-2017` The `medline/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2017). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=22,642 - For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2017_trec-pm-2017', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...} qrels = load_dataset('irds/medline_2017_trec-pm-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2017TrecPm, title={Overview of the TREC 2017 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant}, booktitle={TREC}, year={2017} } ```
irds/medline_2017_trec-pm-2017
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2017", "region:us" ]
2023-01-05T02:52:33+00:00
{"source_datasets": ["irds/medline_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017/trec-pm-2017`", "viewer": false}
2023-01-05T02:52:39+00:00
45ad7d885b6393bc9706bd1e4407859f2bada08f
# Dataset Card for `medline/2017/trec-pm-2018` The `medline/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=22,429 - For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/medline_2017_trec-pm-2018', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/medline_2017_trec-pm-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2018TrecPm, title={Overview of the TREC 2018 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar}, booktitle={TREC}, year={2018} } ```
irds/medline_2017_trec-pm-2018
[ "task_categories:text-retrieval", "source_datasets:irds/medline_2017", "region:us" ]
2023-01-05T02:52:44+00:00
{"source_datasets": ["irds/medline_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`medline/2017/trec-pm-2018`", "viewer": false}
2023-01-05T02:52:50+00:00
e534f88bc693fe3fab9457519d1e7790614bc781
# Dataset Card for `clinicaltrials/2017` The `clinicaltrials/2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=241,006 This dataset is used by: [`clinicaltrials_2017_trec-pm-2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017_trec-pm-2017), [`clinicaltrials_2017_trec-pm-2018`](https://huggingface.co/datasets/irds/clinicaltrials_2017_trec-pm-2018) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clinicaltrials_2017', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2017
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:52:55+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017`", "viewer": false}
2023-01-05T02:53:01+00:00
859236df2ccb1dddf5b47e4db41c0613d95d52c3
# Dataset Card for `clinicaltrials/2017/trec-pm-2017` The `clinicaltrials/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2017). # Data This dataset provides: - `queries` (i.e., topics); count=30 - `qrels`: (relevance assessments); count=13,019 - For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...} qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2017TrecPm, title={Overview of the TREC 2017 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant}, booktitle={TREC}, year={2017} } ```
irds/clinicaltrials_2017_trec-pm-2017
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2017", "region:us" ]
2023-01-05T02:53:06+00:00
{"source_datasets": ["irds/clinicaltrials_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017/trec-pm-2017`", "viewer": false}
2023-01-05T02:53:13+00:00
f33c2c94bae6745e8db596e5a7f5151a24b341bf
# Dataset Card for `clinicaltrials/2017/trec-pm-2018` The `clinicaltrials/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2018). # Data This dataset provides: - `queries` (i.e., topics); count=50 - `qrels`: (relevance assessments); count=14,188 - For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2018TrecPm, title={Overview of the TREC 2018 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar}, booktitle={TREC}, year={2018} } ```
irds/clinicaltrials_2017_trec-pm-2018
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2017", "region:us" ]
2023-01-05T02:53:19+00:00
{"source_datasets": ["irds/clinicaltrials_2017"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2017/trec-pm-2018`", "viewer": false}
2023-01-05T02:53:24+00:00
b6a34cc1fc325fbc3ea0b16c0b34dabd64cb8a41
# Dataset Card for `clinicaltrials/2019` The `clinicaltrials/2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=306,238 This dataset is used by: [`clinicaltrials_2019_trec-pm-2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019_trec-pm-2019) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clinicaltrials_2019', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2019
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:53:30+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2019`", "viewer": false}
2023-01-05T02:53:35+00:00
bed5f92f0ff72e0c08ff8f6f42c2dc3947f7c208
# Dataset Card for `clinicaltrials/2019/trec-pm-2019` The `clinicaltrials/2019/trec-pm-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019/trec-pm-2019). # Data This dataset provides: - `queries` (i.e., topics); count=40 - `qrels`: (relevance assessments); count=12,996 - For `docs`, use [`irds/clinicaltrials_2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'queries') for record in queries: record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...} qrels = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @inproceedings{Roberts2019TrecPm, title={Overview of the TREC 2019 Precision Medicine Track}, author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant and Funda Meric-Bernstam}, booktitle={TREC}, year={2019} } ```
irds/clinicaltrials_2019_trec-pm-2019
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2019", "region:us" ]
2023-01-05T02:53:41+00:00
{"source_datasets": ["irds/clinicaltrials_2019"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2019/trec-pm-2019`", "viewer": false}
2023-01-05T02:53:47+00:00
35f26ab1d47913a5224c280cb37a9920ae195ab5
# Dataset Card for `clinicaltrials/2021` The `clinicaltrials/2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=375,580 This dataset is used by: [`clinicaltrials_2021_trec-ct-2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2021), [`clinicaltrials_2021_trec-ct-2022`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2022) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clinicaltrials_2021', 'docs') for record in docs: record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2021
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:53:52+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021`", "viewer": false}
2023-01-05T02:53:58+00:00
f742aaa8027bcb04ad78516dabb2ca9a9d827c8d
# Dataset Card for `clinicaltrials/2021/trec-ct-2021` The `clinicaltrials/2021/trec-ct-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2021). # Data This dataset provides: - `queries` (i.e., topics); count=75 - `qrels`: (relevance assessments); count=35,832 - For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2021_trec-ct-2021
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2021", "region:us" ]
2023-01-05T02:54:03+00:00
{"source_datasets": ["irds/clinicaltrials_2021"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021/trec-ct-2021`", "viewer": false}
2023-01-05T02:54:09+00:00
281328b8e7dd6c07a712f26cb7733a616a0e13b2
# Dataset Card for `clinicaltrials/2021/trec-ct-2022` The `clinicaltrials/2021/trec-ct-2022` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2022). # Data This dataset provides: - `queries` (i.e., topics); count=50 - For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clinicaltrials_2021_trec-ct-2022
[ "task_categories:text-retrieval", "source_datasets:irds/clinicaltrials_2021", "region:us" ]
2023-01-05T02:54:14+00:00
{"source_datasets": ["irds/clinicaltrials_2021"], "task_categories": ["text-retrieval"], "pretty_name": "`clinicaltrials/2021/trec-ct-2022`", "viewer": false}
2023-01-05T02:54:20+00:00
9ef4fbe912ffaa61a5935000fb3814acc41ec3b9
# Dataset Card for `clueweb09` The `clueweb09` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,040,859,705 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb09', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:25+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09`", "viewer": false}
2023-01-05T02:54:31+00:00
86191503ff0c7ec98d9208a5e5c55414fa23698f
# Dataset Card for `clueweb09/ar` The `clueweb09/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ar). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=29,192,662 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb09_ar', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_ar
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:37+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ar`", "viewer": false}
2023-01-05T02:54:42+00:00
8ca4546e4d972f051bad67e7db0ff248fa9b7b88
# Dataset Card for `clueweb09/catb` The `clueweb09/catb` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/catb). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=50,220,423 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb09_catb', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_catb
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:48+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/catb`", "viewer": false}
2023-01-05T02:54:53+00:00
0b8dc1e0b991a14b27da0abd1c3bd23ffffa1422
# Dataset Card for `clueweb09/de`

The `clueweb09/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/de).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=49,814,309

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_de', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_de
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:54:59+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/de`", "viewer": false}
2023-01-05T02:55:04+00:00
2245bbcf914a04c301f534b5e0bd472dce834876
# Dataset Card for `clueweb09/en`

The `clueweb09/en` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/en).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=503,903,810

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_en', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_en
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:10+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/en`", "viewer": false}
2023-01-05T02:55:16+00:00
3cf3e9a459381b013d5179f4405a5b95b5f94f86
# Dataset Card for `clueweb09/es`

The `clueweb09/es` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/es).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=79,333,950

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_es', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_es
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:21+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/es`", "viewer": false}
2023-01-05T02:55:27+00:00
0787bf65542fbd38df9bebf2a67835e2e6732fa2
# Dataset Card for `clueweb09/fr`

The `clueweb09/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/fr).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=50,883,172

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_fr', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_fr
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:32+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/fr`", "viewer": false}
2023-01-05T02:55:38+00:00
50713902b4395333f7958e3065a30f1523887a96
# Dataset Card for `clueweb09/it`

The `clueweb09/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/it).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=27,250,729

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_it', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_it
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:43+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/it`", "viewer": false}
2023-01-05T02:55:49+00:00
5012e975cecb674f0a3ac6d86b102f837217b243
# Dataset Card for `clueweb09/ja`

The `clueweb09/ja` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ja).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=67,337,717

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_ja', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_ja
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:55:54+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ja`", "viewer": false}
2023-01-05T02:56:00+00:00
32bc7d27a661d23ee5fc7d6a2e0b4b7325f58a96
# Dataset Card for `clueweb09/ko`

The `clueweb09/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ko).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=18,075,141

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_ko', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_ko
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:06+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/ko`", "viewer": false}
2023-01-05T02:56:11+00:00
2e14354e1e75d6d135a7bb87cf1e687405c70f25
# Dataset Card for `clueweb09/pt`

The `clueweb09/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/pt).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=37,578,858

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_pt', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_pt
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:17+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/pt`", "viewer": false}
2023-01-05T02:56:22+00:00
486ef41199f3503cc6fac9d6ac839d6aa37561ba
# Dataset Card for `clueweb09/zh`

The `clueweb09/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/zh).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=177,489,357

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb09_zh', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb09_zh
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:28+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb09/zh`", "viewer": false}
2023-01-05T02:56:34+00:00
d6ab58367a0f5348cf9220927df6c318a7e8dbef
# Dataset Card for `clueweb12`

The `clueweb12` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=733,019,372

This dataset is used by: [`clueweb12_touche-2020-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2020-task-2), [`clueweb12_touche-2021-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2021-task-2)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb12', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb12
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:39+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12`", "viewer": false}
2023-01-05T02:56:45+00:00
02c36d5a67b664d56ed8c1ce5a7e30167c2ad2c2
# Dataset Card for `clueweb12/b13`

The `clueweb12/b13` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13).

# Data

This dataset provides:
 - `docs` (documents, i.e., the corpus); count=52,343,021

This dataset is used by: [`clueweb12_b13_clef-ehealth`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth), [`clueweb12_b13_clef-ehealth_cs`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_cs), [`clueweb12_b13_clef-ehealth_de`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_de), [`clueweb12_b13_clef-ehealth_fr`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_fr), [`clueweb12_b13_clef-ehealth_hu`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_hu), [`clueweb12_b13_clef-ehealth_pl`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_pl), [`clueweb12_b13_clef-ehealth_sv`](https://huggingface.co/datasets/irds/clueweb12_b13_clef-ehealth_sv), [`clueweb12_b13_ntcir-www-1`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-1), [`clueweb12_b13_ntcir-www-2`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-2), [`clueweb12_b13_ntcir-www-3`](https://huggingface.co/datasets/irds/clueweb12_b13_ntcir-www-3), [`clueweb12_b13_trec-misinfo-2019`](https://huggingface.co/datasets/irds/clueweb12_b13_trec-misinfo-2019)

## Usage

```python
from datasets import load_dataset

docs = load_dataset('irds/clueweb12_b13', 'docs')
for record in docs:
    record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
irds/clueweb12_b13
[ "task_categories:text-retrieval", "region:us" ]
2023-01-05T02:56:50+00:00
{"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13`", "viewer": false}
2023-01-05T02:56:56+00:00
b084fbb52a06c66dde5d7443c7809adb87fb3be0
# Dataset Card for `clueweb12/b13/clef-ehealth`

The `clueweb12/b13/clef-ehealth` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
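The `queries` and `qrels` configurations share the `query_id` field, so a typical evaluation setup groups the assessments by query before scoring a run. A minimal stdlib sketch of that join, using hypothetical sample records in place of the real ones returned by the two `load_dataset` calls above:

```python
from collections import defaultdict

# Hypothetical sample records; the real ones come from the `queries`
# and `qrels` configurations of this dataset.
queries = [
    {'query_id': 'q1', 'text': 'asthma treatment'},
    {'query_id': 'q2', 'text': 'knee pain causes'},
]
qrels = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 1},
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 0},
    {'query_id': 'q2', 'doc_id': 'd3', 'relevance': 2},
]

# Group relevance assessments by query_id: query_id -> {doc_id: relevance}.
qrels_by_query = defaultdict(dict)
for rec in qrels:
    qrels_by_query[rec['query_id']][rec['doc_id']] = rec['relevance']

# During evaluation, each query's judged documents are then one lookup away.
for q in queries:
    judged = qrels_by_query[q['query_id']]
```

The same pattern applies to any of the `clueweb12/b13` benchmark subsets, since they all expose `queries` and `qrels` keyed by `query_id`.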
irds/clueweb12_b13_clef-ehealth
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:01+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth`", "viewer": false}
2023-01-05T02:57:07+00:00
5984a4a5a7754911e0a8e133634142117dd084b9
# Dataset Card for `clueweb12/b13/clef-ehealth/cs`

The `clueweb12/b13/clef-ehealth/cs` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/cs).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_cs', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_cs', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_cs
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:12+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/cs`", "viewer": false}
2023-01-05T02:57:18+00:00
23968856269a1c839348b013251f42bf21ba5366
# Dataset Card for `clueweb12/b13/clef-ehealth/de`

The `clueweb12/b13/clef-ehealth/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/de).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_de', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_de', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_de
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:23+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/de`", "viewer": false}
2023-01-05T02:57:29+00:00
73be0c1c975b1671acb7dfd7f350953170460fcd
# Dataset Card for `clueweb12/b13/clef-ehealth/fr`

The `clueweb12/b13/clef-ehealth/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/fr).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_fr', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_fr', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_fr
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:34+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/fr`", "viewer": false}
2023-01-05T02:57:40+00:00
c976f7948099db1858550574cf508185c6d5702c
# Dataset Card for `clueweb12/b13/clef-ehealth/hu`

The `clueweb12/b13/clef-ehealth/hu` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/hu).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_hu', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_hu', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_hu
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:46+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/hu`", "viewer": false}
2023-01-05T02:57:51+00:00
c62848301211e275f01c87396dc445114ec9610d
# Dataset Card for `clueweb12/b13/clef-ehealth/pl`

The `clueweb12/b13/clef-ehealth/pl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/pl).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_pl', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_pl', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_pl
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:57:57+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/pl`", "viewer": false}
2023-01-05T02:58:02+00:00
1348d2eee351a2d03b2da067b7ce83e46cbeb4a9
# Dataset Card for `clueweb12/b13/clef-ehealth/sv`

The `clueweb12/b13/clef-ehealth/sv` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/clef-ehealth/sv).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=300
 - `qrels`: (relevance assessments); count=269,232
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_clef-ehealth_sv', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_clef-ehealth_sv', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'trustworthiness': ..., 'understandability': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Zuccon2016ClefEhealth,
  title={The IR Task at the CLEF eHealth Evaluation Lab 2016: User-centred Health Information Retrieval},
  author={Guido Zuccon and Joao Palotti and Lorraine Goeuriot and Liadh Kelly and Mihai Lupu and Pavel Pecina and Henning M{\"u}ller and Julie Budaher and Anthony Deacon},
  booktitle={CLEF},
  year={2016}
}
@inproceedings{Palotti2017ClefEhealth,
  title={CLEF 2017 Task Overview: The IR Task at the eHealth Evaluation Lab - Evaluating Retrieval Methods for Consumer Health Search},
  author={Joao Palotti and Guido Zuccon and Jimmy and Pavel Pecina and Mihai Lupu and Lorraine Goeuriot and Liadh Kelly and Allan Hanbury},
  booktitle={CLEF},
  year={2017}
}
```
irds/clueweb12_b13_clef-ehealth_sv
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:08+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/clef-ehealth/sv`", "viewer": false}
2023-01-05T02:58:14+00:00
42bbc0b98d162eaac2a7a4250b3691923660e5e8
# Dataset Card for `clueweb12/b13/ntcir-www-1`

The `clueweb12/b13/ntcir-www-1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/ntcir-www-1).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=100
 - `qrels`: (relevance assessments); count=25,465
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_ntcir-www-1', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/clueweb12_b13_ntcir-www-1', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Luo2017Www1,
  title={Overview of the NTCIR-13 We Want Web Task},
  author={Cheng Luo and Tetsuya Sakai and Yiqun Liu and Zhicheng Dou and Chenyan Xiong and Jingfang Xu},
  booktitle={NTCIR},
  year={2017}
}
```
irds/clueweb12_b13_ntcir-www-1
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:19+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/ntcir-www-1`", "viewer": false}
2023-01-05T02:58:25+00:00
a570c1a88cfafb5ee9dbe9fb0e06171bafc771fe
# Dataset Card for `clueweb12/b13/ntcir-www-2`

The `clueweb12/b13/ntcir-www-2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/b13/ntcir-www-2).

# Data

This dataset provides:
 - `queries` (i.e., topics); count=80
 - `qrels`: (relevance assessments); count=27,627
 - For `docs`, use [`irds/clueweb12_b13`](https://huggingface.co/datasets/irds/clueweb12_b13)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/clueweb12_b13_ntcir-www-2', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ...}

qrels = load_dataset('irds/clueweb12_b13_ntcir-www-2', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Mao2018OWww2,
  title={Overview of the NTCIR-14 We Want Web Task},
  author={Jiaxin Mao and Tetsuya Sakai and Cheng Luo and Peng Xiao and Yiqun Liu and Zhicheng Dou},
  booktitle={NTCIR},
  year={2018}
}
```
irds/clueweb12_b13_ntcir-www-2
[ "task_categories:text-retrieval", "source_datasets:irds/clueweb12_b13", "region:us" ]
2023-01-05T02:58:30+00:00
{"source_datasets": ["irds/clueweb12_b13"], "task_categories": ["text-retrieval"], "pretty_name": "`clueweb12/b13/ntcir-www-2`", "viewer": false}
2023-01-05T02:58:36+00:00