# Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks

## About

This is version 2 of the LogiQA dataset, first released as a multi-choice reading comprehension dataset in our previous paper [LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning](https://arxiv.org/abs/2007.08124).

The dataset is collected from the [Chinese Civil Service Entrance Examination](chinagwy.org). The dataset is available in both Chinese and English (by translation). You can download version 1 of the LogiQA dataset [here](https://github.com/lgw863/logiqa-dataset).

To construct the LogiQA2.0 dataset, we:

* Collect more newly released exam questions and practice questions. About 20 provinces in China hold the exam annually, and the exam materials are publicly available on the Internet after the exams; practice questions are also provided by various sources.
* Hire professional translators to re-translate the dataset from Chinese to English, and verify the labels and annotations with human experts. This program is conducted by [Speechocean](en.speechocean.com), a data annotation service provider, and is accomplished with the help of Microsoft Research Asia.
* Introduce a new NLI task to the dataset. The NLI version of the dataset is converted from the MRC version, following previous work such as [Transforming Question Answering Datasets into Natural Language Inference Datasets](https://arxiv.org/abs/1809.02922).

## Datasets

### MRC

The MRC part of the LogiQA2.0 dataset can be found in the `/MRC/` folder.

`train.txt`: the train split of the dataset in JSON Lines format.
An example:
```
{"id": 10471, "answer": 0, "text": "The medieval Arabs had many manuscripts of ancient Greek. When needed, they translate them into Arabic. Medieval Arab philosophers were very interested in Aristotle's Theory of Poetry, which was obviously not shared by Arab poets, because a poet interested in it must want to read Homer's poems. Aristotle himself often quotes Homer's poems. However, Homer's poems were not translated into Arabic until modern times.", "question": "Which of the following options, if true, strongly supports the above argument?", "options": ["Some medieval Arab translators have manuscripts of Homer poems in ancient Greek.", "Aristotle's Theory of Poetry is often quoted and commented by modern Arab poets.", "In Aristotle's Theory of Poetry, most of the content is related to drama, and medieval Arabs also wrote plays and performed them.", "A series of medieval Arab stories, such as Arab Night, are very similar to some parts of Homer's epic."], "type": {"Sufficient Conditional Reasoning": true, "Necessry Condtional Reasoning": true, "Conjunctive Reasoning": true}}
```
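Since each split is stored as JSON Lines, a few lines of standard-library Python are enough to read it. A minimal sketch (the `load_mrc` helper and the shortened record below are ours, for illustration; field names follow the example above):

```python
import json

def load_mrc(path):
    """Read a LogiQA2.0 MRC split stored as JSON Lines: one example per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# A record mirrors the example above; `answer` is an index into `options`.
example = {"id": 10471, "answer": 0,
           "question": "Which option supports the argument?",
           "options": ["option A", "option B", "option C", "option D"]}
gold = example["options"][example["answer"]]  # the gold option text
```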
An example of the Chinese dataset:

```
…
```

An example:

```
{"label": "not entailed", "major_premise": ["Among the employees of a software company, there are three Cantonese, one Beijinger and three northerners"], "minor_premise": " Four are only responsible for software development and two are only responsible for product sales", "conclusion": "There may be at least 7 people and at most 12 people."}
```
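The NLI pairs are derived from the MRC data following the QA-to-NLI conversion work cited above. A rough sketch of the idea, not the official conversion script (the flat `premise` field is a simplification of the `major_premise`/`minor_premise` split shown in the example):

```python
def mrc_to_nli(mrc_example):
    """Pair the context with each option: the gold option yields an
    'entailed' instance, every other option a 'not entailed' one.
    Illustrative only; the released data splits the premise further."""
    records = []
    for i, option in enumerate(mrc_example["options"]):
        records.append({
            "label": "entailed" if i == mrc_example["answer"] else "not entailed",
            "premise": mrc_example["text"],
            "conclusion": option,
        })
    return records

pairs = mrc_to_nli({"answer": 1, "text": "Some context.",
                    "options": ["wrong option", "gold option"]})
```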

## Annotations

The translation and annotation work was outsourced to [Speechocean](en.speechocean.com); the project funding was provided by Microsoft Research Asia.

### Translation

| Final Report | |

Translation style/method:

1. Maintain a unified style; the translated English questions need to preserve the logic of the original questions.
2. Pronouns in the questions need to be translated uniquely and consistently, without ambiguity.
3. The translated English conforms to the form of a proper question, that is, it reads as a clear question from the respondent's perspective.

### Label consistency

Label credibility is manually verified after the translation is done, to maintain faithfulness to the original text. Three workers run a consistency test on each example; if two or more workers give an answer different from the original answer, the translation is redone to guarantee that the label is correct.
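The consistency test amounts to a simple majority check per example. A sketch (the function and argument names are ours, not from the annotation toolchain):

```python
def needs_retranslation(original_answer, worker_answers):
    """Flag an example for re-translation when 2 or more of the 3
    consistency workers disagree with the original answer."""
    disagreements = sum(1 for a in worker_answers if a != original_answer)
    return disagreements >= 2

flag_bad = needs_retranslation("A", ["B", "B", "A"])   # 2 of 3 disagree
flag_good = needs_retranslation("A", ["A", "A", "B"])  # only 1 disagrees
```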

### Additional annotations

The reasoning types of each question are assigned by a total of 5 workers, each of whom is responsible for one reasoning type. We describe the reasoning types (which can be found in our paper) to the workers. The reasoning type field of each question is the collection of the 5 workers' decisions.
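Because each question's `type` field collects the workers' yes/no decisions, tallying type frequencies over a split is straightforward. A sketch over invented records (the type names echo the MRC example above):

```python
from collections import Counter

def count_reasoning_types(examples):
    """Tally how often each reasoning type is marked true across a split."""
    counts = Counter()
    for ex in examples:
        for type_name, present in ex.get("type", {}).items():
            if present:
                counts[type_name] += 1
    return counts

stats = count_reasoning_types([
    {"type": {"Conjunctive Reasoning": True, "Sufficient Conditional Reasoning": False}},
    {"type": {"Conjunctive Reasoning": True}},
])
```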

## Baseline Guidance

### Requirements

* python 3.6+
* pytorch 1.0+
* transformers 2.4.1
* sklearn
* tqdm
* tensorboardX

We recommend using conda to manage virtual environments:

```
conda env update --name logiqa --file requirements.yml
```

### Logiqa

The folder `logiqa` contains both the code and data to run the baseline experiments for LogiQA2.0 MRC.

To fine-tune on the dataset, run the following command from the terminal:

```
bash logiqa.sh
```

### Logiqa2nli

The folder `logiqa2nli` contains both the code and data to run the baseline experiments for LogiQA2.0 NLI.

To fine-tune on the dataset, run the following command from the terminal:

```
bash qa2nli.sh
```

Note: `./scripts` contains the scripts for running other NLI benchmarks.