{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:16:37.698584Z"
},
"title": "KonTra at CMCL 2021 Shared Task: Predicting Eye Movements by Combining BERT with Surface, Linguistic and Behavioral Information",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Konstanz",
"location": {}
},
"email": ""
},
{
"first": "Aikaterini-Lida",
"middle": [],
"last": "Kalouli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Konstanz",
"location": {}
},
"email": ""
},
{
"first": "Diego",
"middle": [],
"last": "Frassinelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Konstanz",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the submission of the team KonTra to the CMCL 2021 Shared Task on eye-tracking prediction. Our system combines the embeddings extracted from a finetuned BERT model with surface, linguistic and behavioral features, resulting in an average mean absolute error of 4.22 across all 5 eyetracking measures. We show that word length and features representing the expectedness of a word are consistently the strongest predictors across all 5 eye-tracking measures.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the submission of the team KonTra to the CMCL 2021 Shared Task on eye-tracking prediction. Our system combines the embeddings extracted from a finetuned BERT model with surface, linguistic and behavioral features, resulting in an average mean absolute error of 4.22 across all 5 eyetracking measures. We show that word length and features representing the expectedness of a word are consistently the strongest predictors across all 5 eye-tracking measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The corpora ZuCo 1.0 and ZuCo 2.0 by Hollenstein et al. (2018 Hollenstein et al. ( , 2019 contain eye-tracking data collected in a series of reading tasks on English materials. For each word of the sentences, five eyetracking measures are recorded: 1) the number of fixations (nFix), 2) the first fixation duration (FFD), 3) the go-past time (GPT), 4) the total reading time (TRT), and 5) the fixation proportion (fixProp). Providing a subset of the two corpora, the CMCL 2021 Shared Task (Hollenstein et al., 2021) requires the prediction of these eye-tracking measures based on any relevant feature.",
"cite_spans": [
{
"start": 37,
"end": 61,
"text": "Hollenstein et al. (2018",
"ref_id": "BIBREF8"
},
{
"start": 62,
"end": 89,
"text": "Hollenstein et al. ( , 2019",
"ref_id": "BIBREF9"
},
{
"start": 489,
"end": 515,
"text": "(Hollenstein et al., 2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle the task, we conduct a series of experiments using various combinations of BERT embeddings (Devlin et al., 2018) and a rich set of surface, linguistic and behavioral features (SLB features). Our experimental setting enables a comparison of the potential of BERT and the SLB features, and allows for the explainability of the system. The best performance is achieved by the models combining word embeddings extracted from a fine-tuned BERT model with a subset of the SLB features that are the most predictive for each eye-tracking measure. Overall, our model was ranked 8th out of 13 models submitted to the shared task.",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are the following: 1) We show that training solely on SLB features provides better results than training solely on word embeddings (both pre-trained and fine-tuned ones).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2) Among the SLB features, we show that word length and linguistic features representing word expectedness consistently show the highest weight in predicting all of the 5 measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To explore the impact of linguistic and cognitive information on eye-movements in reading tasks, we extract a set of surface, linguistic, behavioral and BERT features, as listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Describing Eye-Tracking Measures",
"sec_num": "2"
},
{
"text": "Surface Features Given the common finding that surface characteristics, particularly the length of a word, influence fixation duration (Juhasz and Rayner, 2003; New et al., 2006) , we compute various surface features at word and sentence level (e.g., word and sentence length).",
"cite_spans": [
{
"start": 135,
"end": 160,
"text": "(Juhasz and Rayner, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 161,
"end": 178,
"text": "New et al., 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Describing Eye-Tracking Measures",
"sec_num": "2"
},
{
"text": "The linguistic characteristics of the words co-occurring in a sentence have an effect on eye movements (Clifton et al., 2007) . Thus, we experiment with features of syntactic and semantic nature. The syntactic features are extracted using the Stanza NLP kit (Qi et al., 2020) . For each word, we extract its part-of-speech (POS), its word type (content vs. function word), its dependency relation and its named entity type. According to Godfroid et al. (2018) and Williams and Morris (2004) , word familiarity (both local and global) has an effect on the reader's attention, i.e., readers may pay less attention on words that already occurred in previous context. In this study, we treat familiarity as word expectedness and model it using three types of semantic similarity: a) similarity of the current word w m to the whole sentence (similarity wm,s ), b) similarity of the current word to its previous word (similarity wm,w m\u22121 ), and c) similarity of the current word to all of its previous words within the current sentence (similarity wm,w 1...m\u22121 ). To compute these similarity measures, we use the BERT (base) (De- vlin et al., 2018) pre-trained model 1 and map each word to its pre-trained embedding of layer 11. We chose this layer because it mostly captures semantic properties, while the last layer has been found to be very close to the actual classification task and thus less suitable for our purpose (Jawahar et al., 2019; Lin et al., 2019) . Based on these extracted embeddings, we calculate the cosine similarities.",
"cite_spans": [
{
"start": 103,
"end": 125,
"text": "(Clifton et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 258,
"end": 275,
"text": "(Qi et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 437,
"end": 459,
"text": "Godfroid et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 464,
"end": 490,
"text": "Williams and Morris (2004)",
"ref_id": "BIBREF20"
},
{
"start": 1417,
"end": 1439,
"text": "(Jawahar et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 1440,
"end": 1457,
"text": "Lin et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": null
},
{
"text": "To measure the similarity of the current word to the whole sentence (similarity wm,s ), we take the CLS token to represent the whole sentence; we also experiment with the average token embeddings as the sentence embedding, but we find that the CLS token performs better. For measuring the similarity of the current word to all of its previous words (similarity wm,w 1...m\u22121 ), we average the embeddings of the previous words and find the cosine similarity between this average embedding and the embedding of the current word. Furthermore, semantic surprisal, i.e., the negative log-transformed conditional probability of a word given its preceding context, provides a good measure of predictability of words in context and efficiently predicts reading times (Smith and Levy, 2013), N400 amplitude and pupil dilation (Frank and Thompson, 2012) . We compute surprisal using a bigram language model trained on the lemmatized version of the first slice (roughly 31-million tokens) of the ENCOW14-AX corpus (Sch\u00e4fer and Bildhauer, 2012) . As an additional measure of word expectedness, we also include frequency scores based on the US subtitle corpus (SUBTLEX-US, Brysbaert and New, 2009) .",
"cite_spans": [
{
"start": 816,
"end": 842,
"text": "(Frank and Thompson, 2012)",
"ref_id": "BIBREF5"
},
{
"start": 1002,
"end": 1031,
"text": "(Sch\u00e4fer and Bildhauer, 2012)",
"ref_id": "BIBREF17"
},
{
"start": 1159,
"end": 1183,
"text": "Brysbaert and New, 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": null
},
{
"text": "Behavioral Features As discussed in Juhasz and Rayner (2003) and Clifton et al. (2007) , behavioral measures highly affect eye-movements in reading tasks. For each word in the sentence, we extract behavioral features from large collections of human generated values available online: age of acquisition (Kuperman et al., 2012) , prevalence (Brysbaert et al., 2019), valence, arousal, dominance (Warriner et al., 2013) and concreteness. For concreteness, we experiment both with human generated scores (concreteness human , Brysbaert et al., 2014) and automatically generated ones (concreteness auto , K\u00f6per and Schulte im Walde, 2017). All behavioral measures have been centered (mean equal to zero) and the missing values have been set to the corresponding mean value.",
"cite_spans": [
{
"start": 36,
"end": 60,
"text": "Juhasz and Rayner (2003)",
"ref_id": "BIBREF11"
},
{
"start": 65,
"end": 86,
"text": "Clifton et al. (2007)",
"ref_id": "BIBREF3"
},
{
"start": 303,
"end": 326,
"text": "(Kuperman et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 394,
"end": 417,
"text": "(Warriner et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 523,
"end": 546,
"text": "Brysbaert et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Features",
"sec_num": null
},
{
"text": "Given the success of current language models for various NLP tasks, we investigate their expressivity for human-centered tasks such as eye-tracking: each word is mapped to two types of contextualized embeddings. First, each word is mapped to its BERT (Devlin et al., 2018) embedding extracted from the pre-trained base model. To extract the second type of contextualized embedding, we fine-tune BERT on each of the five eyetracking measures. Specifically, the BERT base model 2 is fine-tuned separately 5 times, one for each of the eye-tracking measures to be predicted. Based on these fine-tuned models, we extract the embedding of each word as a fixed feature vector to be used for further experimentation. This means that in this step each word is in fact mapped to five distinct embeddings, one for each fine-tuned model. In the later experimentation, we use the respective embedding based on which measure is currently predicted (e.g., the embedding extracted from the model fine-tuned for nFix is used to predict nFix). ",
"cite_spans": [
{
"start": 251,
"end": 272,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Features",
"sec_num": null
},
{
"text": "In Experiment 1, we train the aforementioned model architectures on the full set of SLB features. Among the three models, the Random Forest Regressor achieves the best overall performance, with an average MAE across all 5 eye-tracking measures of MAE RF = 4.059 , MAE DT = 4.187, MAE LR = 4.322. To shed light on the most predictive features for each of the eye-tracking measures, we perform feature selection based on the features' weight, i.e., the impurity-based feature importance (Gini importance) computed as the normalized total reduction of the criterion brought by that feature -the higher, the more important the feature. We select features with importance higher than 0.01, resulting in a reduced SLB feature set as shown in Table 2 . This selected set is further used for Experiment 3 (see Section 3.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 736,
"end": 743,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment 1: Using Only SLB Features",
"sec_num": "3.1"
},
{
"text": "Our second experiment aims at investigating the expressivity of the contextualized BERT embeddings. We experiment with the two variants of BERT embeddings (see Section 2). In the first variant, the three models use the pre-trained BERT embeddings, while in the second variant, the models use the fine-tuned BERT embeddings. The latter means that for each of the 5 eye-tracking measures, the extracted embeddings of the corresponding finetuned model are used and 3 models are trained for each measure, with a total of 15 models. We also experiment with the predictions directly resulting from the fine-tuning tasks, but we observe that these predictions show similar performance. This finding is in line with what is reported in Devlin et al. (2018) .",
"cite_spans": [
{
"start": 728,
"end": 748,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Using Only BERT",
"sec_num": "3.2"
},
{
"text": "Extracting BERT embeddings as fixed-length features instead of using the predictions directly out of the fine-tuned model allows us to extend the BERT vectors with further features. Thus, in the last experiment, we train the 3 regression models on an extended vector, comprising the extracted 768-dimensional BERT embedding and additional dimensions for the reduced SLB feature set of Experiment 1 (see Section 3.1). Again, two variants are tested: one using the pre-trained embeddings and the other one using the fine-tuned embeddings of the corresponding model. all 5 eye-tracking measures. When we compare the predictive power of the models including only SLB features against the models trained only on BERT, we see that the embeddings are less informative than the carefully selected set of SLB features. A closer investigation of the selected SLB features in Table 2 provides interesting insights about the nature of the features and the task.",
"cite_spans": [],
"ref_spans": [
{
"start": 865,
"end": 872,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment 3: Enhancing BERT with SLB Features",
"sec_num": "3.3"
},
{
"text": "Surface Features Among all SLB features, word length is consistently the predictor with the highest weight across all 5 measures. Furthermore, word length-sentence length ratio is among the most important contributors in 4 of the 5 measures. This confirms the observation in Hollenstein et al. (2018, p. 10 ) that the probability of a word being skipped reduces as word length increases.",
"cite_spans": [
{
"start": 275,
"end": 306,
"text": "Hollenstein et al. (2018, p. 10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Linguistic Features Two features for word expectedness, i.e., frequency score and similarity wm,w m\u22121 , also show a high predictive power for all 5 measures. This confirms previous findings by Godfroid et al. (2018) and Williams and Morris (2004) . Likewise, similarity wm,w 1...m\u22121 ranks among the most important features for 4 of the 5 measures, and surprisal score for 3 of the 5 measures. Most importantly, surprisal score shows a much higher importance in predicting GPT, which indicates that encountering an unexpected word may cause a regressive reading to re-inspect previous words and thus increases the go-past time. On the other hand, the syntactic properties of a word (e.g., POS, dependency relation and named entity type) do not show any strong effect in our results. The only exception is that numeral tokens are among the most important features in predicting GPT and TRT. After a closer look into the data, we found that a majority of the numeral tokens are information about date (e.g. November 28; . The effect of such numeral tokens could probably be explained by the nature of the data, where a majority of the sentences are biographical sentences from Wikipedia (Hollenstein et al., 2018 (Hollenstein et al., , 2019 . In such data, this numeral information is highly relevant for the context.",
"cite_spans": [
{
"start": 193,
"end": 215,
"text": "Godfroid et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 220,
"end": 246,
"text": "Williams and Morris (2004)",
"ref_id": "BIBREF20"
},
{
"start": 1184,
"end": 1209,
"text": "(Hollenstein et al., 2018",
"ref_id": "BIBREF8"
},
{
"start": 1210,
"end": 1237,
"text": "(Hollenstein et al., , 2019",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Behavioral Features Dominance and age of acquisition also play a significant role in predicting GPT: as indicated in the literature (Juhasz and Rayner, 2003) , such behavioral measures have a strong impact on the processing time of words in context.",
"cite_spans": [
{
"start": 132,
"end": 157,
"text": "(Juhasz and Rayner, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We presented a system of eye-tracking feature prediction which combines BERT with a rich set of surface, linguistic and behavioral (SLB) features. Overall, our three studies indicate that including not only semantic properties that can be directly extracted from text, such as embeddings and surprisal score, but also measures reflecting behavioral (e.g., dominance and age of acquisition) and surface properties (word and sentence length) has a positive impact on the performance of our models in predicting eye-tracking data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/google-research/ bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the regression implementation from: https: //github.com/fancyerii/bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word prevalence norms for 62,000 English lemmas",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Mandera",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Samantha",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Mc-Cormick",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Keuleers",
"suffix": ""
}
],
"year": 2019,
"venue": "Behavior Research Methods",
"volume": "51",
"issue": "2",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert, Pawe\u0142 Mandera, Samantha F Mc- Cormick, and Emmanuel Keuleers. 2019. Word prevalence norms for 62,000 English lemmas. Be- havior Research Methods, 51(2):467-479.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Moving beyond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "New",
"suffix": ""
}
],
"year": 2009,
"venue": "Behavior Research Methods",
"volume": "41",
"issue": "4",
"pages": "977--990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert and Boris New. 2009. Moving be- yond Ku\u010dera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4):977-990.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Concreteness ratings for 40 thousand generally known English word lemmas",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
}
],
"year": 2014,
"venue": "Behavior Research Methods",
"volume": "46",
"issue": "3",
"pages": "904--911",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Brysbaert, Amy Beth Warriner, and Victor Ku- perman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Eye movements in reading words and sentences",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Staub",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 2007,
"venue": "Eye Movements",
"volume": "",
"issue": "",
"pages": "341--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Clifton, Adrian Staub, and Keith Rayner. 2007. Eye movements in reading words and sentences. In Roger P.G. Van Gompel, Martin H. Fischer, Wayne S. Murray, and Robin L. Hill, editors, Eye Movements, pages 341-371. Elsevier, Oxford.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Early effects of word surprisal on pupil size during reading",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "34",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Frank and Robin Thompson. 2012. Early effects of word surprisal on pupil size during reading. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 34.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incidental vocabulary learning in a natural reading context: An eye-tracking study",
"authors": [
{
"first": "Aline",
"middle": [],
"last": "Godfroid",
"suffix": ""
},
{
"first": "Jieun",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Ina",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Ballard",
"suffix": ""
},
{
"first": "Yaqiong",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "Shinhye",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Abdhi",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Hyung-Jo",
"middle": [],
"last": "Yoon",
"suffix": ""
}
],
"year": 2018,
"venue": "Bilingualism",
"volume": "21",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aline Godfroid, Jieun Ahn, Ina Choi, Laura Ballard, Yaqiong Cui, Suzanne Johnston, Shinhye Lee, Ab- dhi Sarkar, and Hyung-Jo Yoon. 2018. Incidental vocabulary learning in a natural reading context: An eye-tracking study. Bilingualism, 21(3):563.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CMCL 2021 shared task on eye-tracking prediction",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Yohei",
"middle": [],
"last": "Oseki",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Emmanuele Chersoni, Cassandra Ja- cobs, Yohei Oseki, Laurent Pr\u00e9vot, and Enrico San- tus. 2021. CMCL 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cog- nitive Modeling and Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "Scientific Data",
"volume": "5",
"issue": "1",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data, 5(1):1-13.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.00903"
]
},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Marius Troendle, Ce Zhang, and Nicolas Langer. 2019. ZuCo 2.0: A dataset of physi- ological recordings during natural reading and anno- tation. preprint arXiv:1912.00903.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3651-3657.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barbara",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Juhasz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition",
"volume": "29",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J Juhasz and Keith Rayner. 2003. Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimen- tal Psychology: Learning, Memory, and Cognition, 29(6):1312.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "K\u00f6per",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications",
"volume": "",
"issue": "",
"pages": "24--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian K\u00f6per and Sabine Schulte im Walde. 2017. Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses. In Proceedings of the 1st Workshop on Sense, Con- cept and Entity Representations and their Applica- tions, pages 24-30.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Age-of-acquisition ratings for 30,000 English words",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Stadthagen-Gonzalez",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "44",
"issue": "",
"pages": "978--990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Kuperman, Hans Stadthagen-Gonzalez, and Marc Brysbaert. 2012. Age-of-acquisition ratings for 30,000 English words. Behavior Research Meth- ods, 44(4):978-990.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Open sesame: Getting inside BERT's linguistic knowledge",
"authors": [
{
"first": "Yongjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "241--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reexamining the word length effect in visual word recognition: New evidence from the English Lexicon Project",
"authors": [
{
"first": "Boris",
"middle": [],
"last": "New",
"suffix": ""
},
{
"first": "Ferrand",
"middle": [],
"last": "Ludovic",
"suffix": ""
},
{
"first": "Pallier",
"middle": [],
"last": "Christophe",
"suffix": ""
},
{
"first": "Brysbaert",
"middle": [],
"last": "Marc",
"suffix": ""
}
],
"year": 2006,
"venue": "Psychonomic Bulletin & Review",
"volume": "13",
"issue": "1",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boris New, Ferrand Ludovic, Pallier Christophe, and Brysbaert Marc. 2006. Reexamining the word length effect in visual word recognition: New ev- idence from the English Lexicon Project. Psycho- nomic Bulletin & Review, 13(1):45-52.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building large corpora from the web using a new efficient tool chain",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Bildhauer",
"suffix": ""
}
],
"year": 2012,
"venue": "Eighth International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "486--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Sch\u00e4fer and Felix Bildhauer. 2012. Building large corpora from the web using a new efficient tool chain. In Eighth International Conference on Language Resources and Evaluation (LREC), pages 486-493.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The effect of word predictability on reading time is logarithmic",
"authors": [
{
"first": "Nathaniel",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognition",
"volume": "128",
"issue": "3",
"pages": "302--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel J Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas",
"authors": [
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2013,
"venue": "Behavior Research Methods",
"volume": "45",
"issue": "4",
"pages": "1191--1207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods, 45(4):1191-1207.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Eye movements, word familiarity, and vocabulary acquisition",
"authors": [
{
"first": "Rihana",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Morris",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Cognitive Psychology",
"volume": "16",
"issue": "1-2",
"pages": "312--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rihana Williams and Robin Morris. 2004. Eye movements, word familiarity, and vocabulary acquisition. European Journal of Cognitive Psychology, 16(1-2):312-339.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "More than words: Word predictability, prosody, gesture and mouth movements in natural language comprehension",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Frassinelli",
"suffix": ""
},
{
"first": "Jyrki",
"middle": [],
"last": "Tuomainen",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [
"I"
],
"last": "Skipper",
"suffix": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2020,
"venue": "BioRxiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Zhang, Diego Frassinelli, Jyrki Tuomainen, Jeremy I Skipper, and Gabriella Vigliocco. 2020. More than words: Word predictability, prosody, gesture and mouth movements in natural language comprehension. preprint BioRxiv.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "The complete set of surface, linguistic and behavioral (SLB) features and the BERT features."
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table><tr><td/><td>word length (0.40), surprisal score (0.27), word length-sentence length ratio (0.06), similarity wm,s (0.04),</td></tr><tr><td>GPT</td><td>similarity wm,wm\u22121 (0.02), frequency score (0.02), stop word (0.02), similarity wm,w1...m\u22121 (0.02), numeral token (0.02),</td></tr><tr><td/><td>age of acquisition (0.01), dominance (0.01)</td></tr><tr><td>TRT</td><td>word length (0.70), frequency score (0.11), word length-sentence length ratio (0.03), numeral token (0.01), similarity wm,wm\u22121 (0.01), similarity wm,s (0.01), sentence length in characters (0.01)</td></tr><tr><td>fixProp</td><td>word length (0.84), similarity wm,wm\u22121 (0.04), frequency score (0.03), similarity wm,w1...m\u22121 (0.02)</td></tr></table>",
"type_str": "table",
"text": "Measure Feature Name nFix word length (0.81), frequency score (0.05), word length-sentence length ratio (0.01), similarity wm,wm\u22121 (0.01), surprisal score (0.01), similarity wm,w1...m\u22121 (0.01) FFD word length (0.80), frequency score (0.06), similarity wm,wm\u22121 (0.02), word length-sentence length ratio (0.02), similarity wm,w1...m\u22121 (0.02), surprisal score (0.01)"
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table><tr><td>3 Predicting Eye-Tracking Measures</td></tr><tr><td>We conduct three experiments using different feature combinations, and experiment with three model architectures. The models' parameters are experimentally defined. First, we train a Linear Regression model (LR). Second, we train a Decision Tree Regressor (DT) with the mse (Mean Squared Error) criterion and a maximum depth of 7. Last, we train a Random Forest Regressor (RF) with the mse criterion, 15 estimators and a maximum depth of 7. Before training the models, all categorical feature values are one-hot-encoded and all numeric values are normalized within the range [0, 1].</td></tr></table>",
"type_str": "table",
"text": "SLB features with importance \u2265 0.01. Features in each row are sorted by their importance in descending order. Features that are strong predictors in all 5 measures are marked in bold."
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td>reports the results from all experimental settings on the development set and test set (80/20 split). Due to space limits, we only report the results of the best model in each configuration. Overall, combining the embeddings from the fine-tuned version of BERT with the surface, linguistic and behavioral features gives the best performance on</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Mean absolute errors on the development and the test set. The pre-evaluation test set results are the ones submitted to the competition. We obtained the post-evaluation results after further fine-tuning."
}
}
}
}