|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:27:53.984943Z" |
|
}, |
|
"title": "Bridging Information-Seeking Human Gaze and Machine Reading Comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Malmaud", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yevgeni", |
|
"middle": [], |
|
"last": "Berzak", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension. To this end, we collect a new eye-tracking dataset with a large number of participants engaging in a multiple choice reading comprehension task. Our analysis of this data reveals increased fixation times over parts of the text that are most relevant for answering the question. Motivated by this finding, we propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior during reading comprehension. We demonstrate that this approach leads to performance gains on multiple choice question answering in English for a state-of-the-art reading comprehension model.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension. To this end, we collect a new eye-tracking dataset with a large number of participants engaging in a multiple choice reading comprehension task. Our analysis of this data reveals increased fixation times over parts of the text that are most relevant for answering the question. Motivated by this finding, we propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior during reading comprehension. We demonstrate that this approach leads to performance gains on multiple choice question answering in English for a state-of-the-art reading comprehension model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Much of the work in NLP strives to develop systems that are able to perform linguistic tasks similarly to humans. To achieve this goal, one typically provides NLP systems with human knowledge about the task at hand. This knowledge can come in the form of linguistic annotations, hand-crafted rules and access to linguistic databases, as well as various model design choices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we study the possibility of providing the model with an inductive bias by using human behavioral signals based on eye movements in reading as an additional source of information which can guide NLP models to adequately process linguistic input and solve linguistic tasks. As a case study, we examine reading comprehension, a task of central importance for probing both human and machine understanding of text. To enable this study, we collect eye movement data from 269 participants who engage in a reading comprehension task using the materials of OneStopQA (Berzak et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 573, |
|
"end": 594, |
|
"text": "(Berzak et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We argue that reading comprehension is a particularly well-suited task for linking human eye movement information to NLP modelling due to the substantial correspondence between reading times and the relevance of the text segment for answering the question. Hahn and Keller (2018) have shown this correspondence by establishing increased reading times on the correct answer in a question answering task where answers are named entities. Our study generalizes this result to an arbitrary QA setting, and demonstrates longer reading times for portions of the text which are most pertinent for answering the question correctly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 279, |
|
"text": "Hahn and Keller (2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Building on this observation, we develop a new approach to machine reading comprehension in which the model is directed to mimic human fixation times over the text, given the question. The idea behind this approach is to encourage the model to focus on question-relevant information. Specifically, we introduce a multi-task reading comprehension architecture in which a state-of-the-art transformer model jointly performs question-answering and prediction of the human reading time distribution over the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our modelling framework is behavioral, treating the reading comprehension model itself as a blackbox. This leads to both theoretical and practical advantages. From a theoretical perspective, this approach is appealing as it creates a direct parallel to human reading, in which eye movements are an external behavior. Practically, our approach has the advantage of being modular, allowing swapping our model with other reading comprehension models, and the task with other NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our experiments demonstrate that our approach leads to consistent gains in question-answering performance across different training regimes, model variants, and on both in-and out-of-domain evalu-ations. In particular, our model outperforms baseline models with gaze from human reading without exposure to the question. It also performs better than using manual annotations of the textual span critical for answering the question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To summarize, we present three contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. We collect an eye-tracking dataset with a large number of participants engaging in free-form multiple choice question answering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. We show that human gaze behavior during question answering is strongly taskconditioned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "3. We demonstrate that human gaze can improve the performance of a state-of-the-art reading comprehension model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While this work is a proof of concept and uses a relatively costly data collection procedure, as eye-tracking technology continues to become more ubiquitous and affordable, it will be feasible to perform large scale data collection and deployment of similar approaches for QA and other NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work contributes to two areas of research. The first is how human gaze is conditioned on the reading task. This question was previously investigated in the domain of question answering by Hahn and Keller (2018) , who collected eye-tracking data in an experimental setup similar to ours for materials from the CNN and Daily Mail corpus (Hermann et al., 2015) . They demonstrate that reading times on the named entity which is the correct answer to the question are longer if participants are shown the question before reading the passage as compared to ordinary reading. Our work builds on this result, introducing a more general QA setup which is not restricted to questions whose answer is a named entity. Crucially, we further leverage this information for improving machine question answering. The second research area to which or work contributes is augmenting NLP models with gaze data. In this area, gaze during reading has been used for tasks such as syntactic annotation (Barrett and S\u00f8gaard, 2015a,b; Barrett et al., 2016; Strzyz et al., 2019 ), text compression (Klerke et al., 2016 ), text readability (Gonz\u00e1lez-Gardu\u00f1o and S\u00f8gaard, 2017), Named Entity Recognition (Hollenstein and Zhang, 2019) , and sentiment classification (Mishra et al., 2016 (Mishra et al., , 2017 (Mishra et al., , 2018 . Work on the first four tasks used task-independent eye-tracking corpora, primarily the Dundee corpus (Kennedy et al., 2003) and GECO (Cop et al., 2017) . For the task of sentiment classification, the authors used task specific eye-tracking corpora in which the participants were asked to perform sentiment classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 214, |
|
"text": "Hahn and Keller (2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 361, |
|
"text": "(Hermann et al., 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 983, |
|
"end": 1013, |
|
"text": "(Barrett and S\u00f8gaard, 2015a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1014, |
|
"end": 1035, |
|
"text": "Barrett et al., 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1036, |
|
"end": 1055, |
|
"text": "Strzyz et al., 2019", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1076, |
|
"end": 1096, |
|
"text": "(Klerke et al., 2016", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1180, |
|
"end": 1209, |
|
"text": "(Hollenstein and Zhang, 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1241, |
|
"end": 1261, |
|
"text": "(Mishra et al., 2016", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1262, |
|
"end": 1284, |
|
"text": "(Mishra et al., , 2017", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1285, |
|
"end": 1307, |
|
"text": "(Mishra et al., , 2018", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1411, |
|
"end": 1433, |
|
"text": "(Kennedy et al., 2003)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1443, |
|
"end": 1461, |
|
"text": "(Cop et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our study differs from this literature in several aspects. First, we address the previously unexplored task of reading comprehension, which has established theoretical and empirical connections to eye movements in reading (Just and Carpenter, 1980; Reichle et al., 2010; Rayner et al., 2016; Hahn and Keller, 2018, among others) . Also differently from these studies, we cover and directly compare both a task specific reading condition (Hunting) and a task-independent condition (Gathering), as well as both external (Dundee) and corpus specific (OneStopQA) eye-tracking data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 248, |
|
"text": "(Just and Carpenter, 1980;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 270, |
|
"text": "Reichle et al., 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 291, |
|
"text": "Rayner et al., 2016;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 328, |
|
"text": "Hahn and Keller, 2018, among others)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our QA task can be viewed as a generalization of the work in Mirsha et al. (2016; , where instead of being asked about the sentiment of a paragraph, subjects are presented with arbitrary questions. Our multitask approach for jointly performing the QA task and predicting gaze is similar to Klerke et al. (2016) , Berrett et al. 2018and Mishra et al. (2018) . In particular, in Equation 4 we use the same loss term as Barrett et al. (2018) which consists of a linear combination of an NLP task loss and gaze prediction loss. Our approach differs from Barrett et al. (2018) in that their model uses the gaze predictions as input attention weights for the NLP task, while our model treats gaze only as an output. Our approach provides a parallel to human reading, in which eye movements are an external behavior rather than an input to language processing tasks. Our work differs from Mishra et al. (2018) in the model and the use of a single auxiliary objective based on gaze. Finally, we note that in Vajjala et al. (2016) eye-tracking data from ESL learners was collected for 4 articles from the same source of OneStopEnglish articles (Vajjala and Lu\u010di\u0107, 2018) used here, and utilized to study the influence of text difficulty level on fixation measures and reading comprehension. Our work focuses on a different task and a different population of readers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 81, |
|
"text": "Mirsha et al. (2016;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 310, |
|
"text": "Klerke et al. (2016)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 356, |
|
"text": "Mishra et al. (2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 417, |
|
"end": 438, |
|
"text": "Barrett et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 571, |
|
"text": "Barrett et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 902, |
|
"text": "Mishra et al. (2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1000, |
|
"end": 1021, |
|
"text": "Vajjala et al. (2016)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A large body of work exists on QA, including span prediction (e.g. BiDAF (Seo et al., 2017) ), cloze (e.g. (Hermann et al., 2015)), and multiple choice QA (e.g. Stanford Attentive Reader (Chen et al., 2016) ). Here, we focus on multiple choice QA due to its prevalence in human evaluations of reading comprehension, and use RoBERTa due to its state-of-the-art performance on this task. Further, neural models for QA deploy various notions of internal attention. The study of NLP model internal attention has drawn much interest in recent years (Adi et al., 2017; Clark et al., 2019; Serrano and Smith, 2019; Kovaleva et al., 2019; Hoover et al., 2019, among others) . In this work we abstract away from model internal dynamics due to their complexity, and the theoretical justification for treating gaze as an external behavior rather than an internal model property. Examination of internal model attention and its relation to human gaze is however an intriguing research direction that we intend to pursue in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 91, |
|
"text": "(Seo et al., 2017)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 206, |
|
"text": "(Chen et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 562, |
|
"text": "(Adi et al., 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 582, |
|
"text": "Clark et al., 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 583, |
|
"end": 607, |
|
"text": "Serrano and Smith, 2019;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 608, |
|
"end": 630, |
|
"text": "Kovaleva et al., 2019;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 665, |
|
"text": "Hoover et al., 2019, among others)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use two reading comprehension resources, OneStopQA (Berzak et al., 2020) and RACE (Lai et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Berzak et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 103, |
|
"text": "(Lai et al., 2017)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "OneStopQA is a reading comprehension dataset containing paragraph-level multiple choice reading comprehension questions for 30 Guardian articles (162 paragraphs) taken from the OneStopEnglish dataset (Vajjala and Lu\u010di\u0107, 2018) . Each article is available in three parallel text difficulty levels: the original Advanced text and two simplified versions, Intermediate and Elementary. Each paragraph has three multiple choice reading comprehension questions. All the questions are answerable based on any of the text level versions of the paragraph. We use the Advanced and Elementary text versions, corresponding to 972 question-paragraph pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 225, |
|
"text": "(Vajjala and Lu\u010di\u0107, 2018)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The answers for each OneStopQA question are structured as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A is the correct answer. Answering a question correctly requires information from a textual span in the paragraph called the critical span. Importantly, the critical span does not contain the answer in verbatim form. B is a distractor which represents a plausible miscomprehension of the critical span. C is a distractor which is anchored in an additional span in the paragraph, called the distractor span. D is a distractor which has no support in the text. Both the critical span and the distractor span are annotated manually in the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "RACE is the standard dataset in NLP for training and evaluation of multiple choice reading com-prehension. It comprises reading comprehension examination materials for middle school and high school students in China. Similarly to OneStopQA, RACE questions are multiple choice, with four possible answers for each question. As opposed to OneStopQA, the questions are based on an entire article rather than a specific paragraph and the answers have no systematic structure with respect to the text. Although RACE has been widely used in NLP, it was recently shown that it has substantial quality assurance drawbacks; 47% of its questions are guessable by RoBERTa without the passage, and 18% do not have a unique correct answer (Berzak et al., 2020) . We therefore treat RACE as a secondary evaluation benchmark. Statistics on the reading comprehension materials are presented in Table 1 . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 726, |
|
"end": 747, |
|
"text": "(Berzak et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 878, |
|
"end": 885, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reading Comprehension Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We collected a dataset of eye movements for the 30 OneStopQA articles. The articles are divided into three 10-article batches with 54 paragraphs in each batch. Each participant read a single 10article batch. Following the experimental setup of (Hahn and Keller, 2018), a given batch is presented in one of two possible between subject conditions: Hunting and Gathering. In the Hunting condition participants are presented with the question prior to reading the text, while in the Gathering condition the question is provided only after the participant has completed reading the text. A single experiment trial consists of reading a paragraph and answering one reading comprehension question about it. In the Hunting condition, a trial has 5 pages in which the screen shows one page at a time. In the first page, the participant reads the question (henceforth question preview page). In the second page, they read the paragraph. In the third page they read the question again. The fourth page retains the question, and also displays the four answers. After choosing one of the answers, the fifth page informs the participant on whether they answered the question correctly. The Gathering condition is identical to the Hunting condition, except that participants are not presented with the question preview page. Consequently, subjects in this condition have to be prepared for any question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OneStopQA Eye-Tracking Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Each trial was randomly assigned to one of six conditions in a Latin square design, where each condition is a combination of one of the three questions and one of the two paragraph levels. The presentation order of the articles and the assignment of answers to A -D letters was randomized. Eye movements were recorded using an EyeLink 1000 Plus eye tracker (SR Research) at a sampling rate of 1000Hz. The experiment duration was typically 1 -1.5 hours. Further details on the eye-tracking experiment are provided in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OneStopQA Eye-Tracking Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We collected data from 269 participants, with an average of 7.5 participants per trial (questionparagraph level pair). We excluded trials in which participants did not answer the question correctly, remaining with 6.3 participants per trial. The overall question answering accuracy rate was 86.9% in the Hunting condition and 81.9% in the Gathering condition, which is lower (p < 10 \u22124 ). 1 Figure 1 : Mean Total Fixation Duration inside and outside the critical span in the Hunting (with question preview) and Gathering (without question preview) conditions. Error bars correspond to a 95% confidence interval from a mixed-effects model that accounts for variation of fixation durations across subjects and questions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 391, |
|
"end": 399, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OneStopQA Eye-Tracking Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1 Satterhwaite's method on a mixed-effects model: correct \u223c preview + (preview||subject) + (preview||example).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "OneStopQA Eye-Tracking Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We motivate our approach by demonstrating that human gaze distributions are strongly conditioned on the reading comprehension task. This conditioning has been previously established for the case of named entities (Hahn and Keller, 2018) , and we examine it here in a more general QA setting. Specifically, we consider speed-normalized Total Fixation Duration; for each subject, we take the Total Fixation Duration (i.e. sum of all the fixation times) on a word and normalize it by the subject's total reading time for the passage. Consider the example in Figure 2 , where we visualize the speednormalized gaze on each word averaged across subjects for the same question -paragraph pair in the Hunting (with question preview) and Gathering (without question preview) conditions. As can be seen from the heatmaps, the gaze distributions are fundamentally different between these conditions. In particular, in the Hunting condition we observe a noticeable correspondence between gaze and the annotated critical span. Although the degree of correspondence between gaze and the critical span in the Hunting condition depends on the specifics of the question and the text, the presented example is representative of a large portion of our items. To further substantiate this observation, in Figure 1 we compare the average Total Fixation Duration within versus outside the critical span in both the Hunting and Gathering conditions. We observe that in the Hunting condition, reading times are significantly longer within the critical span compared to outside of the critical span (p < 10 \u221215 ), 2 while in the Gathering condition they are slightly shorter within the critical span (p < 10 \u22124 ). The difference between within-span vs outside-of-span reading times between Hunting and Gathering conditions is also significant (p < 10 \u221215 ). We further note that the total reading time for the passage is shorter in the Hunting condition (p < 10 \u22124 ), consistent with more targeted reading as compared to the Gathering condition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "(Hahn and Keller, 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 555, |
|
"end": 563, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1285, |
|
"end": 1294, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Conditioned Gaze in Human Reading Comprehension", |
|
"sec_num": "4" |
|
}, |
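
{

"text": "As a rough illustration of this analysis (a minimal Python sketch, not the analysis code used in the paper; the array layout and the span mask are hypothetical), the speed-normalized fixation durations and the within-span versus outside-span comparison can be computed as follows:\n\nimport numpy as np\n\ndef span_contrast(tf, in_span):\n    # tf: (num_subjects, num_words) Total Fixation Duration per subject and word\n    # in_span: boolean array of length num_words, True for critical-span words\n    norm = tf / tf.sum(axis=1, keepdims=True)  # speed-normalize per subject\n    within = norm[:, in_span].mean()           # mean normalized duration inside the span\n    outside = norm[:, ~in_span].mean()         # mean normalized duration outside the span\n    return within, outside\n\nIn the paper, the corresponding comparison is carried out with a mixed-effects model over subjects and questions rather than a simple difference of means.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Question Conditioned Gaze in Human Reading Comprehension",

"sec_num": "4"

},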
|
{ |
|
"text": "While our analysis provides evidence for an increased concentration of gaze time around text that is critical for answering the question, the potential utility of human gaze is not limited to this aspect alone. Human gaze can be viewed as a soft form of text annotation that relates the entire text to cognitive load during processing. In particular, it can in principle provide valuable fine-grained information within the critical span.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question Conditioned Gaze in Human Reading Comprehension", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To test the effectiveness of utilizing human gaze data for enhancing the performance of a reading comprehension system, we trained a reading comprehension model to perform the same multiple choice task as the human subjects. We then conducted a series of controlled experiments to assess how the accuracy of the model is affected by providing it with human eye movements information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method: Joint Question Answering and Human Gaze Prediction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We utilize the RoBERTa transformer architecture, which has shown state-of-the-art performance on the multiple choice reading comprehension task (Liu et al., 2019) . We experiment with both the Base and the Large variants of this model. To allow RoBERTa to benefit from the gaze data, we use multi-task learning with hard parameter sharing (Caruana, 1993) , and modify RoBERTa to jointly predict the answer to each question and the human gaze times allocated to each passage word. Each multiple-choice example is composed of the passage d, the question Q, and the four possible answers {y 1 , y 2 , y 3 , y 4 }. We follow the standard procedure for using transformer architectures for multiple-choice tasks, concatenating the passage, question, and answer [CLS, d, SEP, Q, y] for each possible answer y. The resulting string is encoded through RoBERTa. We then take the final embedding of the CLS token for each answer and run it through a logistic layer to return the probability of each answer being correct. This probability is used to calculate a cross-entropy QA loss term L QA for each example.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 162, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 354, |
|
"text": "(Caruana, 1993)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 774, |
|
"text": "[CLS, d, SEP, Q, y]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L QA = \u2212 log p(y c )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Where y c is the correct answer for the question. Figure 3 : Model diagram. The model uses the standard transformer architecture for multiple choice QA, augmented to simultaneously predict human reading times over the passage.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
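
{

"text": "For concreteness, the following is a minimal sketch of the question-answering head described above (illustrative only, not the authors' released code; tensor shapes and names are assumptions). Given the final-layer CLS embedding of each encoded [CLS, d, SEP, Q, y] string, a linear scorer followed by a softmax over the four candidates yields the answer probabilities used in Equation 1:\n\nimport torch\nimport torch.nn.functional as F\n\ndef qa_loss(cls_embeddings, correct_index, scorer):\n    # cls_embeddings: (4, hidden_dim) final-layer CLS vectors, one per candidate answer\n    # scorer: torch.nn.Linear(hidden_dim, 1), the logistic output layer\n    logits = scorer(cls_embeddings).squeeze(-1)  # one score per candidate answer\n    log_probs = F.log_softmax(logits, dim=-1)    # probability of each answer being correct\n    return -log_probs[correct_index]             # Equation (1): -log p(y_c)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "5.1"

},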
|
{ |
|
"text": "We additionally calculate an auxiliary loss based on gaze information. As in Figure 2 , our reference metric RT (w) is speed-normalized Total Fixation Duration (T F ). Specifically, for each passage word w and subject s, we consider the subject's Total Fixation Duration on the word T F s (w) normalized by the sum of all their fixation durations over the passage, and then average this quantity across all subjects who read the passage.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 85, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "RT (w) = 1 S s T F s (w) w T F s (w )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
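
{

"text": "A minimal sketch of how the target distribution RT(w) in Equation 2 can be computed (illustrative; the array layout is an assumption and this is not the authors' preprocessing code):\n\nimport numpy as np\n\ndef reading_time_targets(tf):\n    # tf: (num_subjects, num_words) Total Fixation Duration per subject and passage word\n    per_subject = tf / tf.sum(axis=1, keepdims=True)  # normalize by each subject's total reading time\n    return per_subject.mean(axis=0)                   # average over subjects -> RT(w), sums to 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "5.1"

},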
|
{ |
|
"text": "In cases where RoBERTa's byte pair tokenizer (Sennrich et al., 2016 ) splits a single word into multiple tokens, we evenly split the gaze time associated with the word among the resulting tokens. We take the encoding of each passage word at the last layer of RoBERTa for each candidate answer y and add a linear layer parameterized by a weight vector v \u2208 R d shared across all passage word positions, where d is the RoBERTa embedding dimension. For each passage word w, this layer maps from the d-dimensional word embedding to a scalar gaze value. These values are put through a softmax layer, obtaining predictions RT predy (w) which are guaranteed to be between 0 and 1 for each word and sum to 1 for each passage, making them comparable to our normalized human gaze measurements RT . These predictions are then averaged across the four possible answers to obtain reading time predictions for each passage word RT pred (w). Finally, we compute the cross-entropy loss between the gaze predictions and observed human gaze.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 67, |
|
"text": "(Sennrich et al., 2016", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "L gaze = \u2212 w RT (w) log RT pred (w) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
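
{

"text": "A minimal sketch of the gaze prediction head and the loss in Equation 3 (illustrative only; tensor shapes and variable names are assumptions rather than the authors' implementation):\n\nimport torch\nimport torch.nn.functional as F\n\ndef gaze_loss(passage_hidden, rt_target, v):\n    # passage_hidden: (4, passage_len, hidden_dim) last-layer embeddings of the passage\n    # tokens under each of the four answer encodings; v: torch.nn.Linear(hidden_dim, 1)\n    scores = v(passage_hidden).squeeze(-1)          # (4, passage_len) scalar gaze scores\n    rt_pred_y = F.softmax(scores, dim=-1)           # per-answer distributions over passage words\n    rt_pred = rt_pred_y.mean(dim=0)                 # average across the four answers\n    return -(rt_target * torch.log(rt_pred)).sum()  # Equation (3): cross-entropy with RT(w)\n\nThe training loss then combines this term with the QA loss as in Equation 4, L = (1 \u2212 \u03b1) L_QA + \u03b1 L_gaze.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model",

"sec_num": "5.1"

},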
|
{ |
|
"text": "The final loss term is a convex combination of the gaze loss term and the answer prediction loss, where a hyperparameter \u03b1 is the relative weight assigned to the gaze loss term: Figure 3 presents a diagram of our model. Our modelling approach is fundamentally behavioral, as it attempts to mimic human eye movements as an external behavior. It treats the model itself largely as a black-box, relying only on the model's final query-conditioned representations of the passage words. It is therefore also modularthe RoBERTa model can be substituted with any QA model which provides passage word representations. Furthermore, our framework is compatible not only with the multiple choice variant of the QA task, but also with other answer output formats.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 186, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "L = (1 \u2212 \u03b1)L QA + \u03b1L gaze (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also note that the standard multiple choice QA transformer architecture requires a copy of the passage and the question for each answer, and thus the reading time predictions are generated for each copy and averaged. In QA models were the query and passage are encoded only once, such averaging would not be required. Further, other architectures are conceivable for joint multiple choice QA and gaze prediction. In particular, one may consider architectures which do not include the answers for gaze prediction; for example, through soft parameter-sharing multi-task approaches. We chose hard parameter sharing as it enables predicting gaze with only a minimal architecture change and a small number of additional parameters to the standard multiple choice QA transformer model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Each experiment consists of a training set of QA examples from OneStopQA accompanied by gaze data, a development set, and a test set. For each experiment, we fine-tune an initial model for 15 epochs for each \u03b1 \u2208 [0, .2, .4, .6, .8, 1.0] . We pick the epoch and \u03b1 that have the highest questionanswering accuracy on the development set and report accuracy on the test set. For experiments on OneStopQA, we perform five-fold cross validation where each fold has 18 training articles, 6 development articles and 6 test articles. Each article appears three times in train, once in dev and once in test across the 5 folds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 236, |
|
"text": "[0, .2, .4, .6, .8, 1.0]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Procedure", |
|
"sec_num": "5.2" |
|
}, |
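
{

"text": "Schematically, the selection of \u03b1 and the stopping epoch described above can be sketched as follows (the helper callables are hypothetical stand-ins for the actual fine-tuning and evaluation code):\n\ndef select_configuration(init_model, fine_tune_one_epoch, dev_accuracy,\n                         alphas=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0), max_epochs=15):\n    # init_model() returns a fresh initial model (with or without prior RACE fine-tuning);\n    # fine_tune_one_epoch(model, alpha) runs one epoch with loss (1 - alpha) * L_QA + alpha * L_gaze;\n    # dev_accuracy(model) returns question-answering accuracy on the development set.\n    best_acc, best_cfg = -1.0, None\n    for alpha in alphas:\n        model = init_model()\n        for epoch in range(1, max_epochs + 1):\n            model = fine_tune_one_epoch(model, alpha)\n            acc = dev_accuracy(model)\n            if acc > best_acc:\n                best_acc, best_cfg = acc, (alpha, epoch)\n    return best_cfg, best_acc",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Procedure",

"sec_num": "5.2"

},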
|
{ |
|
"text": "We test two initial models:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "1. No RACE fine-tuning using RoBERTa that has not been fine-tuned for QA on RACE. This experiment shows the value of incorporating eye-tracking data in data-scarce scenarios where only a small amount of data is available for fine-tuning on the given task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "2. With RACE fine-tuning using RoBERTa that has been fine-tuned on RACE to perform multiple choice question answering, following the procedure in (Liu et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 164, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "For each fine-tuning regime, we test the model for two levels of generalization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "1. Within-domain where we use our five-fold cross validation setup to train and test on OneStopQA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "2. Out-of-domain where we train on all 30 OneStopQA articles and use the RACE dev and test sets for development and testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We note that in addition to the quality assurance issues with RACE mentioned in Section 3.1, the out-of-domain RACE evaluations are particularly challenging due to substantial differences in the genres and questions types between OneStopQA and RACE, and the small size of OneStopQA as compared to RACE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conditions", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We compare our model with two baselines which do not use auxiliary loss. We further introduce four auxiliary loss models, which replace Hunting condition gaze with alternative information sources for measuring the importance of each passage word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "These two baselines do not utilize the auxiliary loss during model fine-tuning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. No OneStopQA Fine-tuning The model is not fine-tuned for QA on OneStopQA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. With OneStopQA Fine-tuning The model is fine-tuned for QA on OneStopQA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "No Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These four models are fine-tuned for QA on OneStopQA, and use an auxiliary loss where gaze in the Hunting condition is replaced with other ways for weighting each word in the passage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "With Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Question-Passage Similarity In this baseline, the auxiliary information is based on the similarity between the question and each passage word. We encode the question and the passage separately with an off-the-shelf encoder (here, RoBERTa that has not been fine-tuned for question-answering) and compute the dot-product between each encoded passage word and the final encoding of the question's CLS token. These values are then normalized by applying a softmax function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "With Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. Gaze Gathering Dundee Here, we utilize gaze data from the Dundee corpus (Kennedy et al., 2003) , allowing us to examine the benefit of predicting gaze on the same texts used for QA, versus unrelated texts. We split each Dundee article into passages of size equal to the average OneStopQA passage (125 words), yielding 453 passages. We then normalize the average Total Fixation Duration across Dundee's 10 subjects as for OneStopQA. In each training step, we predict answers on one tuning denotes whether the model has been first fine-tuned for QA on RACE. The first two rows are baselines without an auxiliary loss, without and with QA fine-tuning on OneStopQA. The following four rows are baselines fine-tuned for QA on OneStopQA with auxiliary loss, using four different alternatives for measuring the importance of each passage word. The last row is our primary model variant, which uses gaze in the Hunting condition. All the results with OneStopQA fine-tunings are averaged over three runs of the model. batch of OneStopQA questions and gaze distributions on one batch of Dundee paragraphs chosen at random, and perform a step of gradient descent. This interleaved procedure is similar to that used by Barrett et al. (2018) , and is analogous to the other baselines, where we predict answers on one batch of OneStopQA examples and gaze distribution on those same examples for each gradient descent step.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 97, |
|
"text": "(Kennedy et al., 2003)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1210, |
|
"end": 1231, |
|
"text": "Barrett et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "With Auxiliary Loss", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this method, we use gaze data from the Gathering variant of the OneStopQA reading experiment where subjects do not see the question before seeing the paragraph they will later be questioned about, and hence their gaze is necessarily not question-dependent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaze Gathering OneStopQA", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "OneStopQA, each question includes a manual annotation which indicates the span in the passage which is critical for answering the question. We assign a gaze value of 1 to the tokens within the span and 0 to those outside it, and normalize with softmax as before. This corresponds to a theoretical subject who looks equally at each word within the critical span and not anywhere else in the passage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical Span Annotations OneStopQA In", |
|
"sec_num": "4." |
|
}, |
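
{

"text": "As referenced in baselines 1 and 4 above, the following is a minimal sketch of how these two alternative auxiliary targets can be constructed (illustrative only; tensor names and span indices are assumptions, not the authors' code):\n\nimport torch\nimport torch.nn.functional as F\n\ndef similarity_targets(passage_hidden, question_cls):\n    # passage_hidden: (passage_len, hidden_dim) passage word encodings from an off-the-shelf\n    # encoder; question_cls: (hidden_dim,) final CLS encoding of the question\n    scores = passage_hidden @ question_cls  # dot product between each passage word and the question\n    return F.softmax(scores, dim=-1)        # normalized pseudo-gaze distribution over the passage\n\ndef critical_span_targets(passage_len, span_start, span_end):\n    # 1 for tokens inside the annotated critical span, 0 outside, then softmax-normalized;\n    # the softmax spreads most of the weight uniformly over the in-span tokens\n    mask = torch.zeros(passage_len)\n    mask[span_start:span_end] = 1.0\n    return F.softmax(mask, dim=-1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "With Auxiliary Loss",

"sec_num": null

},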
|
{ |
|
"text": "We note that the last two baselines are new methods for improving machine QA using human-generated behavioral data (gaze and span annotations) that have not been previously proposed in the literature, and constitute very strong alternatives to our model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Critical Span Annotations OneStopQA In", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Our results are summarized in Table 2 . All the results involving OneStopQA fine-tunings are averaged across three runs. In the following, p values are indicated when the difference in the performance of the compared models is statistically significant at the p < 0.05 level.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 37, |
|
"text": "Table 2", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Fine-tuning the model for QA on OneStopQA is most beneficial in the two resource-lean regimes when the model has not been previously fine-tuned on RACE (p < 10 \u221210 , Wald test). Similarly, adding auxiliary loss to the QA model in these two regimes has a substantially larger impact on model performance compared to performing prior fine-tuning on RACE (p < 10 \u22128 for all baselines).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In our within-domain evaluations on OneStopQA, we observe improvements of our model over all the baselines in all evaluations, except for the case of the Large model without RACE fine-tuning where our model comes second. We also observe improvements in the out-ofdomain evaluations on RACE. When the Large model is fine-tuned for QA only on OneStopQA, it obtains an accuracy of 53.0, reflecting a 0.4 improvement over the strongest baseline. The Base model comes second in this evaluation. When first fine-tuning the model for QA on RACE, then performing additional fine-tuning on OneStopQA, the Base model obtains an improvement of 0.1 over the strongest auxiliary loss baseline. For the Large model we observe a similar improvement when using gaze, with the same performance in the Hunting and Gathering conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Interestingly, we do not observe a consistent ordering in the performance of the baselines. In particular, we do not observe a clear advantage of using gaze in the Gathering condition over Question-Passage Similarity. We also obtain comparable performance when gaze data in the Gathering condition comes from OneStopQA and Dundee. Notably, in nearly all the evaluations our model performs better compared to the manual Critical Span Annotation baseline. We hypothesize that this may be because the annotated spans do not capture potential inter-annotator variation in span annotations, as well as within-span information which is informative for our task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We note that while the gains over the strongest baselines are not statistically significant at the .05 level, the overall consistent pattern across evaluation regimes suggests the promise of using Hunting gaze data as the target of the auxiliary loss objective over any other single baseline. Finally, we note that an \u03b1 of 0.2 -0.4 was most often chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We present a framework for performing automated reading comprehension in a human-like fashion, yielding performance gains for a state-of-the art reading comprehension model. Our work also contributes to the study of human reading, providing evidence for a systematic conditioning of human reading on the reading comprehension task. In the future we intend to study the relation between gaze and internal model attention, and further explore the relation between gaze, task and task performance in QA and well as other tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "performed upon failure to trigger the text at the beginning of a trial as described below. The experimenters were instructed repeat calibration until an average validation error below 0.3 \u2022 was reached.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Prior to the presentation of the question preview, paragraph and question, participants were presented with a page presenting a fixation target located at (300, 186), the same position as the first letter of text on the following page. The targets were q for the question preview, p for the paragraph and Q for the question. The presentation of the following text page was triggered by a fixation of at least 250ms within a 39px\u00d748px rectangular area centered around the 19px\u00d738px area of the target letter. This corresponds to a horizontal margin of about half a letter width, and vertical margin of about quarter of a line space around the target letter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Triggering and Recalibration", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Failure to produce a 250ms fixation within 4 seconds on the first target of the trial (q target in the Hunting condition and p target in the Gathering condition), automatically triggered recalibration. For subsequent trial targets (p and Q in the Hunting condition and Q in the Gathering condition) the next page was presented even if the participant was not able to produce a 250ms fixation on the target letter within 4 seconds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Triggering and Recalibration", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This and subsequent tests are calculated using Satterthwaite's method applied to a mixed-effects model that treats subjects and questions as crossed random effects. Using R formula notation, the model is gaze \u223c span * condition + (span||subject)+(condition*span||example)). Tests were performed with the lme4 and lmerTest R packages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We gratefully acknowledge support from Elemental Cognition and from NSF grant IIS1815529, a Google Faculty Research Award, and a Newton Brain Science Award to RPL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used a Tower Mount Eyelink 1000 Plus eye tracker (SR Research) at a sampling rate of 1000Hz. Eye movements were recorded for participants' dominant eye.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The experiment was presented on a 27inch monitor (Dell U2715H) with a display area of 597mm\u00d7336mm, resolution of 2560px\u00d71440px and refresh rate of 60Hz. Participants' eye level was 750mm away from the top of the monitor's display area and 795 away from its bottom. In this setup participants eyes were about 45mm below top of the monitor's display, approximately at the same height as the top most position of the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Monitor", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Participants used a controller (Logitech Gamepad F310) during the experiment. The button A was used for proceeding to the next page after finishing reading as well as for confirming the answer selection. The four buttons of the directional pad were used for choosing answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Controller", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used the Lucida Sans Typewriter monospace font, with font size of 25pt (each letter occupying 19px\u00d738px). We used triple spacing (76px) between lines. The top left position of the questions and the paragraphs was (300, 186) with a text area width of 1824px (96 characters). Questions were 1-2 lines and paragraphs were 3-10 lines. Answers were presented in a cross arrangement, with text width of 700px, and were 1-3 lines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used 9 point calibration with bulls-eye targets (18px outer circle 6px inner circle). Calibration was performed at least 3 times during the experiment: once at the beginning of the experiment and once after each of two breaks. Calibration was also", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calibration", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Adi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Einat", |
|
"middle": [], |
|
"last": "Kermany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Lavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Sequence classification with human attention", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Bingel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nora", |
|
"middle": [], |
|
"last": "Hollenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "302--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders S\u00f8gaard. 2018. Sequence classification with human attention. In CoNLL, pages 302-312.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Weakly supervised part-ofspeech tagging using eye-tracking data", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Bingel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "579--584", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Barrett, Joachim Bingel, Frank Keller, and An- ders S\u00f8gaard. 2016. Weakly supervised part-of- speech tagging using eye-tracking data. In ACL, vol- ume 2, pages 579-584.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reading behavior predicts syntactic categories", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "345--349", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015a. Reading behavior predicts syntactic categories. In CoNLL, pages 345-349.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using reading behavior to predict grammatical functions", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Workshop on Cognitive Aspects of Computational Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015b. Using read- ing behavior to predict grammatical functions. In Workshop on Cognitive Aspects of Computational Language Learning, pages 1-5.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "STARC: Structured annotations for reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Yevgeni", |
|
"middle": [], |
|
"last": "Berzak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Malmaud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yevgeni Berzak, Jonathan Malmaud, and Roger Levy. 2020. STARC: Structured annotations for reading comprehension. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multitask learning: A knowledgebased source of inductive bias", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1993. Multitask learning: A knowledge- based source of inductive bias. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A thorough examination of the cnn/daily mail reading comprehension task", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2358--2367", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen, Jason Bolton, and Christopher D Man- ning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. pages 2358- 2367.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "What does BERT look at? an analysis of BERT's attention", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Urvashi", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Black-boxNLP Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Black- boxNLP Workshop, pages 276-286.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Presenting geco: An eyetracking corpus of monolingual and bilingual sentence reading", |
|
"authors": [ |
|
{ |
|
"first": "Uschi", |
|
"middle": [], |
|
"last": "Cop", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Dirix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Drieghe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wouter", |
|
"middle": [], |
|
"last": "Duyck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Behavior research methods", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "602--615", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting geco: An eyetracking cor- pus of monolingual and bilingual sentence reading. Behavior research methods, 49(2):602-615.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Using gaze to predict text readability", |
|
"authors": [ |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Valeria Gonz\u00e1lez-Gardu\u00f1o", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "438--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ana Valeria Gonz\u00e1lez-Gardu\u00f1o and Anders S\u00f8gaard. 2017. Using gaze to predict text readability. In Pro- ceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 438-443.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Modeling task effects in human reading with neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Hahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.00054" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Hahn and Frank Keller. 2018. Modeling task effects in human reading with neural attention. arXiv preprint arXiv:1808.00054.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Teaching machines to read and comprehend", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Moritz Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Kocisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Espeholt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1693--1701", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NeurIPS, pages 1693-1701.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Entity recognition at first sight: Improving NER with eye movement information", |
|
"authors": [ |
|
{ |
|
"first": "Nora", |
|
"middle": [], |
|
"last": "Hollenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ce", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nora Hollenstein and Ce Zhang. 2019. Entity recog- nition at first sight: Improving NER with eye move- ment information. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "exBERT: A visual analysis tool to explore learned representations in transformers models", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Hoover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Strobelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.05276" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2019. exBERT: A visual analysis tool to explore learned representations in transformers mod- els. arXiv preprint arXiv:1910.05276.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A theory of reading: From eye fixations to comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Marcel", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Just", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patricia", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Psychological review", |
|
"volume": "87", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcel A Just and Patricia A Carpenter. 1980. A the- ory of reading: From eye fixations to comprehension. Psychological review, 87(4):329.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The dundee corpus", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00ebl", |
|
"middle": [], |
|
"last": "Pynte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "European conference on eye movement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Kennedy, Robin Hill, and Jo\u00ebl Pynte. 2003. The dundee corpus. In European conference on eye movement.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Improving sentence compression by learning to predict gaze", |
|
"authors": [ |
|
{ |
|
"first": "Sigrid", |
|
"middle": [], |
|
"last": "Klerke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sigrid Klerke, Yoav Goldberg, and Anders S\u00f8gaard. 2016. Improving sentence compression by learning to predict gaze. In NAACL-HLT.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Revealing the dark secrets of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Kovaleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Romanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rumshisky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4365--4374", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1445" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In EMNLP, pages 4365-4374.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Race: Large-scale reading comprehension dataset from examinations", |
|
"authors": [ |
|
{ |
|
"first": "Guokun", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qizhe", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanxiao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "785--794", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale read- ing comprehension dataset from examinations. In EMNLP, pages 785-794.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "RoBERTa: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network", |
|
"authors": [ |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuntal", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "377--387", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhijit Mishra, Kuntal Dey, and Pushpak Bhat- tacharyya. 2017. Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In ACL, pages 377-387.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Leveraging cognitive features for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diptesh", |
|
"middle": [], |
|
"last": "Kanojia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seema", |
|
"middle": [], |
|
"last": "Nagar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuntal", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016. Leveraging cognitive features for sentiment analysis. In CoNLL.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Cognition-cognizant sentiment analysis with multitask subjectivity summarization based on annotators' gaze behavior", |
|
"authors": [ |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srikanth", |
|
"middle": [], |
|
"last": "Tamilselvam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riddhiman", |
|
"middle": [], |
|
"last": "Dasgupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seema", |
|
"middle": [], |
|
"last": "Nagar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuntal", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhijit Mishra, Srikanth Tamilselvam, Riddhiman Dasgupta, Seema Nagar, and Kuntal Dey. 2018. Cognition-cognizant sentiment analysis with multi- task subjectivity summarization based on annotators' gaze behavior. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "So much to read, so little time: How do we read, and can speed reading help?", |
|
"authors": [ |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Schotter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"E", |
|
"J" |
|
], |
|
"last": "Masson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Potter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Treiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Psychological Science in the Public Interest", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "4--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keith Rayner, Elizabeth R Schotter, Michael EJ Mas- son, Mary C Potter, and Rebecca Treiman. 2016. So much to read, so little time: How do we read, and can speed reading help? Psychological Science in the Public Interest, 17(1):4-34.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Eye movements during mindless reading", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Reichle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Reineberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Schooler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Psychological Science", |
|
"volume": "21", |
|
"issue": "9", |
|
"pages": "1300--1310", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik D Reichle, Andrew E Reineberg, and Jonathan W Schooler. 2010. Eye movements during mindless reading. Psychological Science, 21(9):1300-1310.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Bidirectional attention flow for machine comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aniruddha", |
|
"middle": [], |
|
"last": "Kembhavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Is attention interpretable?", |
|
"authors": [ |
|
{ |
|
"first": "Sofia", |
|
"middle": [], |
|
"last": "Serrano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2931--2951", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? pages 2931-2951.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Towards making a dependency parser see", |
|
"authors": [ |
|
{ |
|
"first": "Michalina", |
|
"middle": [], |
|
"last": "Strzyz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vilares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michalina Strzyz, David Vilares, and Carlos G\u00f3mez- Rodr\u00edguez. 2019. Towards making a dependency parser see. In EMNLP-IJCNLP.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Onestopenglish corpus: A new corpus for automatic readability assessment and text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Sowmya", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivana", |
|
"middle": [], |
|
"last": "Lu\u010di\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sowmya Vajjala and Ivana Lu\u010di\u0107. 2018. On- estopenglish corpus: A new corpus for automatic readability assessment and text simplification. In Workshop on Innovative Use of NLP for Building Ed- ucational Applications.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Towards grounding computational linguistic approaches to readability: Modeling reader-text interaction for easy and difficult texts", |
|
"authors": [ |
|
{ |
|
"first": "Sowmya", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Detmar", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Eitel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katharina", |
|
"middle": [], |
|
"last": "Scheiter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sowmya Vajjala, Detmar Meurers, Alexander Eitel, and Katharina Scheiter. 2016. Towards grounding computational linguistic approaches to readability: Modeling reader-text interaction for easy and dif- ficult texts. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complex- ity (CL4LC), pages 38-48, Osaka, Japan. The COL- ING 2016 Organizing Committee.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Outside of critical spanPer-word Total Fixation Duration (ms)", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "0", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Example of gaze distributions in the Hunting and Gathering conditions for an Elementary level paragraph. The color of each word corresponds to its Total Fixation Duration divided by the overall passage reading time, averaged across participants. The critical span appears in bold red. The distractor span appears in purple italics.", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Question Answering accuracy for RoBERTa Base and Large on OneStopQA and RACE. RACE Fine-", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |