---
language:
- it
language_details: it-IT
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
configs:
- config_name: default
data_files:
- split: test_1
path: "multichoice_v1_test.jsonl"
- split: dev_1
path: "multichoice_v1_dev.jsonl"
- split: test_2
path: "multichoice_v2_test.jsonl"
- split: dev_2
path: "multichoice_v2_dev.jsonl"
size_categories:
- n<1K
---
### QA4FAQ @ EVALITA 2016
Original dataset information is available [here](http://qa4faq.github.io/).
## Data format
The data has been converted for use as a multiple-choice question answering task.
There are two versions of the task, `test_1` and `test_2`, each with a matching dev split (`dev_1`, `dev_2`) and each containing the same data processed in a slightly different way; a loading sketch follows.
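The jsonl files can be read directly with the `datasets` library; a minimal sketch, assuming the four files (named as in the YAML header above) sit in the working directory:

```python
from datasets import load_dataset

# Map each split name to its jsonl file, mirroring the YAML config.
ds = load_dataset(
    "json",
    data_files={
        "test_1": "multichoice_v1_test.jsonl",
        "dev_1": "multichoice_v1_dev.jsonl",
        "test_2": "multichoice_v2_test.jsonl",
        "dev_2": "multichoice_v2_dev.jsonl",
    },
)
print(ds["test_1"][0]["question"])
```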
### test_1
The data is in jsonl format, where each line is a json object with the following fields:
- `id`: a unique identifier for the question
- `question`: the question
- `A`, `B`, `C`, `D`: the four candidate answers to the question
- `correct_answer`: the letter of the correct answer (`'A'`, `'B'`, `'C'`, or `'D'`)

The wrong answers are randomly drawn from the other (question, answer) pairs in the dataset.
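Each record can be scored as a standard four-way multiple-choice item; a minimal sketch (the record shown is a hypothetical illustration, not an actual dataset entry):

```python
import json

# Hypothetical test_1 record (illustrative values only, not real data).
line = json.dumps({
    "id": "q-001",
    "question": "Come posso richiedere un nuovo contatore?",
    "A": "Compilando il modulo sul sito.",
    "B": "Non è previsto alcun rimborso.",
    "C": "Il servizio è attivo dal lunedì al venerdì.",
    "D": "La bolletta arriva ogni due mesi.",
    "correct_answer": "A",
})

record = json.loads(line)

# Render the item as a four-way multiple-choice prompt.
prompt = record["question"] + "\n" + "\n".join(
    f"{letter}. {record[letter]}" for letter in "ABCD"
)

predicted = "A"  # stand-in for a model's predicted letter
print(prompt)
print("correct" if predicted == record["correct_answer"] else "wrong")
```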
### test_2
The data is in jsonl format, where each line is a json object with the following fields:
- `id`: a unique identifier for the question
- `question`: the question
- `A`, `B`, `C`, `D`: the candidate (question, answer) pairs
- `correct_answer`: the letter of the correct (question, answer) pair (`'A'`, `'B'`, `'C'`, or `'D'`)

The wrong (question, answer) pairs are created by randomly choosing answers from the dataset.
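A minimal sketch of how such distractor pairs could be constructed (not the original conversion script; it assumes a distractor pairs a question with a randomly chosen answer that does not belong to it, and all data values are illustrative):

```python
import json
import random

random.seed(0)

# Hypothetical gold (question, answer) pairs (illustrative values only).
pairs = [
    ("Quali sono gli orari dello sportello?", "Dal lunedì al venerdì, 8-13."),
    ("Come segnalo una perdita?", "Chiamando il numero verde."),
    ("Dove trovo la bolletta?", "Nell'area clienti del sito."),
    ("Come disdico il contratto?", "Inviando il modulo di recesso."),
]

def make_item(item_id, gold):
    """Build a test_2-style record: the gold pair plus three distractor
    pairs, each created by giving another question a randomly chosen
    answer that does not belong to it."""
    gold_q, gold_a = gold
    others = [p for p in pairs if p != gold]
    distractors = []
    for q, a in random.sample(others, 3):
        wrong_a = random.choice([ans for _, ans in pairs if ans != a])
        distractors.append((q, wrong_a))
    options = [gold] + distractors
    random.shuffle(options)
    letters = "ABCD"
    record = {"id": item_id, "question": gold_q}
    for letter, (q, a) in zip(letters, options):
        record[letter] = f"({q}, {a})"
    record["correct_answer"] = letters[options.index(gold)]
    return record

print(json.dumps(make_item("q-001", pairs[0]), ensure_ascii=False, indent=2))
```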
## Publications
```
@inproceedings{agirre-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 2: Semantic Textual Similarity, {E}nglish, {S}panish and Pilot on Interpretability",
    author = "Agirre, Eneko and
      Banea, Carmen and
      Cardie, Claire and
      Cer, Daniel and
      Diab, Mona and
      Gonzalez-Agirre, Aitor and
      Guo, Weiwei and
      Lopez-Gazpio, I{\~n}igo and
      Maritxalar, Montse and
      Mihalcea, Rada and
      Rigau, German and
      Uria, Larraitz and
      Wiebe, Janyce",
    editor = "Nakov, Preslav and
      Zesch, Torsten and
      Cer, Daniel and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2045",
    doi = "10.18653/v1/S15-2045",
    pages = "252--263",
}
```
```
@inproceedings{nakov-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 3: Answer Selection in Community Question Answering",
    author = "Nakov, Preslav and
      M{\`a}rquez, Llu{\'\i}s and
      Magdy, Walid and
      Moschitti, Alessandro and
      Glass, Jim and
      Randeree, Bilal",
    editor = "Nakov, Preslav and
      Zesch, Torsten and
      Cer, Daniel and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2047",
    doi = "10.18653/v1/S15-2047",
    pages = "269--281",
}
```