{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:16:35.214308Z"
},
"title": "LAST at CMCL 2021 Shared Task: Predicting Gaze Data During Reading with a Gradient Boosting Decision Tree Approach",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": "",
"affiliation": {
"laboratory": "Laboratoire d'analyse statistique des textes -LAST Institut de recherche en sciences psychologiques",
"institution": "Universit\u00e9 catholique de Louvain Place Cardinal Mercier",
"location": {
"addrLine": "10",
"postCode": "1348",
"settlement": "Louvain-la-Neuve",
"country": "Belgium"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A LightGBM model fed with target word lexical characteristics and features obtained from word frequency lists, psychometric data and bigram association measures has been optimized for the 2021 CMCL Shared Task on Eye-Tracking Data Prediction. It obtained the best performance of all teams on two of the five eye-tracking measures to predict, allowing it to rank first on the official challenge criterion and to outperform all deep-learning based systems participating in the challenge.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "A LightGBM model fed with target word lexical characteristics and features obtained from word frequency lists, psychometric data and bigram association measures has been optimized for the 2021 CMCL Shared Task on Eye-Tracking Data Prediction. It obtained the best performance of all teams on two of the five eye-tracking measures to predict, allowing it to rank first on the official challenge criterion and to outperform all deep-learning based systems participating in the challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes the system proposed by the Laboratoire d'analyse statistique des textes (LAST) for the Cognitive Modeling and Computational Linguistics (CMCL) Shared Task on Eye-Tracking Data Prediction. This task is receiving more and more attention due to its importance in modeling human language understanding and improving NLP technology (Hollenstein et al., 2019; Mishra and Bhattacharyya, 2018) .",
"cite_spans": [
{
"start": 348,
"end": 374,
"text": "(Hollenstein et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 375,
"end": 406,
"text": "Mishra and Bhattacharyya, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As one of the objectives of the organizers is to \"compare the capabilities of machine learning approaches to model and analyze human patterns of reading\" (https://cmclorg.github.io/ shared_task), I have chosen to adopt a generic point of view with the main objective of determining what level of performance can achieve a system derived from the one I developed to predict the lexical complexity of words and polylexical expressions (Shardlow et al., 2021) . That system was made up of a gradient boosting decision tree prediction model fed with features obtained from word frequency lists, psychometric data, lexical norms and bigram association measures. If there is no doubt that predicting lexical complexity is a different problem, one can think that the features useful for it also play a role in predicting eye movement during reading.",
"cite_spans": [
{
"start": 433,
"end": 456,
"text": "(Shardlow et al., 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The next section summarizes the main characteristics of the challenge. Then the developed system is described in detail. Finally, the results in the challenge are reported along with an analysis performed to get a better idea of the factors that affect the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The eye-tracking data for this shared task were extracted from the Zurich Cognitive Language Processing Corpus (ZuCo 1.0 and ZuCo 2.0, Hollenstein et al., 2018 Hollenstein et al., , 2020 . It contains gaze data for 991 sentences read by 18 participants during a normal reading session. The learning set consisted in 800 sentences and the test set in 191 sentences.",
"cite_spans": [
{
"start": 135,
"end": 159,
"text": "Hollenstein et al., 2018",
"ref_id": "BIBREF14"
},
{
"start": 160,
"end": 186,
"text": "Hollenstein et al., , 2020",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Task",
"sec_num": "2"
},
{
"text": "The task was to predict five eye-tracking features, averaged across all participants and scaled in the range between 0 and 100, for each word of a series of sentences: (1) the total number of fixations (nFix), (2) the duration of the first fixation (FFD), (3) the sum of all fixation durations, including regressions (TRT), (4) the sum of the duration of all fixations prior to progressing to the right, including regressions to previous words (GPT), and (5) the proportion of participants that fixated the word (fixProp). These dependent variables (DVs) are described in detail in Hollenstein et al. (2021) . The submissions were evaluated using the mean absolute error (MAE) metric and the systems were ranked according to the average MAE across all five DVs, the lowest being the best.",
"cite_spans": [
{
"start": 582,
"end": 607,
"text": "Hollenstein et al. (2021)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Task",
"sec_num": "2"
},
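{
"text": "As an illustration, a minimal sketch (not part of the official scoring code) of the evaluation criterion just described: the MAE is computed for each DV and then averaged across the five DVs.\n\nimport numpy as np\n\ndef challenge_score(y_true, y_pred):\n    # y_true, y_pred: arrays of shape (n_tokens, 5), one column per DV\n    maes = np.abs(y_true - y_pred).mean(axis=0)  # MAE for each of the five DVs\n    return maes.mean()  # official ranking criterion: average MAE, lower is better",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Task",
"sec_num": "2"
},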
{
"text": "As the DVs are of different natures (number, proportion and duration), their mean and variance are very different. The mean of fixProp is 21 times greater than that of FFD and its variance 335 times. Furthermore, while nFix and fixProp were scaled independently, FFD, GPT and TRT were scaled together. For that reason, the mean and dispersion of these three measures are quite different: FFD must necessarily be less than or equal to TRT and GPT 1 . These two factors strongly affect the importance of the different DVs in the final ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Task",
"sec_num": "2"
},
{
"text": "The regression models were built by the 2.2.1 version of the LightGBM software (Ke et al., 2017) , a well-known implementation of the gradient boosting decision tree approach. This type of model has the advantage of not requiring feature preprocessing, such as a logarithmic transformation, since it is insensitive to monotonic transformations, and of including many parameters allowing a very efficient overfit control. It also has the advantage of being able to directly optimize the MAE.",
"cite_spans": [
{
"start": 79,
"end": 96,
"text": "(Ke et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},
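{
"text": "As an illustration, a minimal sketch (in Python, whereas the actual pipeline relied on SAS programs for feature handling; the feature matrix and DV values below are placeholders) of a LightGBM regressor set up to optimize the MAE directly:\n\nimport numpy as np\nimport lightgbm as lgb\n\nX_train = np.random.rand(1000, 40)    # placeholder feature matrix\ny_train = np.random.rand(1000) * 100  # placeholder DV values, e.g. nFix\n\n# 'regression_l1' is LightGBM's built-in L1 (MAE) objective\nmodel = lgb.LGBMRegressor(objective='regression_l1', n_estimators=4000, learning_rate=0.03)\nmodel.fit(X_train, y_train)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},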
{
"text": "Sentence preprocessing and feature extraction as well as the post-processing of the LightGBM predictions were performed using custom SAS programs running in SAS University (still freely available for research at https://www.sas.com/en_us/ software/university-edition.html). Sentences were first lemmatized by the TreeTagger (Schmid, 1994) to get the lemma and POS-tag of each word. Special care was necessary to match the TreeTagger tokenization with the Zuco original one. Punctuation marks and other similar symbols (e.g., \"(\" or \"$\") were simply disregarded as they were always bound to a word in the tokens to predict. The attribution to the words of the values on the different lists was carried out in two stages: on the basis of the spelling form when it is found in the list or of the lemma if this is not the case.",
"cite_spans": [
{
"start": 324,
"end": 338,
"text": "(Schmid, 1994)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},
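{
"text": "A minimal sketch of this two-stage attribution (assumed logic, not the author's SAS code; 'freqs' and the example words are hypothetical):\n\ndef lookup(word, lemma, norms):\n    # norms: dict mapping a spelling form or lemma to a feature value\n    if word in norms:\n        return norms[word]   # stage 1: spelling form\n    if lemma in norms:\n        return norms[lemma]  # stage 2: fall back to the lemma\n    return None              # missing value, left to LightGBM\n\nfreqs = {'dog': 12.3, 'run': 45.1}\nprint(lookup('dogs', 'dog', freqs))  # form absent, lemma found -> 12.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},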
{
"text": "The features used in the final models as well as the LightGBM parameters were optimized by a 5-fold cross validation procedure, using the sentence and not the token as the sampling unit. The number of boosting iterations was set by using the LightGBM early stopping procedure which stops training when the MAE on the validation fold does not improve in the last 200 rounds. The predicted values which were outside the [0, 100] interval were brought back in this one, which makes it possible to improve the MAE very slightly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},
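{
"text": "A minimal sketch of this procedure (assumed arrays; 'sentence_ids' gives each token's sentence so that folds are sampled by sentence, not by token; the callback API shown is that of current LightGBM releases rather than version 2.2.1):\n\nimport numpy as np\nimport lightgbm as lgb\nfrom sklearn.model_selection import GroupKFold\n\nX, y = np.random.rand(5000, 40), np.random.rand(5000) * 100\nsentence_ids = np.random.randint(0, 800, size=5000)\n\nfor tr, va in GroupKFold(n_splits=5).split(X, y, groups=sentence_ids):\n    model = lgb.LGBMRegressor(objective='regression_l1', n_estimators=10000)\n    model.fit(X[tr], y[tr], eval_set=[(X[va], y[va])],\n              callbacks=[lgb.early_stopping(stopping_rounds=200)])  # stop after 200 rounds without improvement\n    preds = np.clip(model.predict(X[va]), 0, 100)  # bring predictions back into [0, 100]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure to Build the Models",
"sec_num": "3.1"
},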
{
"text": "To predict the five DVs, five different models were trained. The only differences between them were in the LightGBM parameters. There were thus since one can be larger or smaller than the other in a significant number of cases. all based on exactly the same features, which are described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Target Word Length. The length in characters of the preceding word, the target word and the following one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Target Word Position. The position of the word in the sentence encoded in two ways: the rank of the word going from 1 to the sentence total number of words and the ratio between the rank of the word and the total number of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
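{
"text": "A minimal sketch of these two encodings (hypothetical helper):\n\ndef position_features(rank, n_words):\n    # rank: 1-based position of the word; n_words: sentence length\n    return {'rank': rank, 'relative_position': rank / n_words}\n\nprint(position_features(3, 12))  # {'rank': 3, 'relative_position': 0.25}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},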
{
"text": "Target Word POS-tag and Lemma. The POStag and lemma for the target word and the preceding one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Corpus Frequency Features. Frequencies in corpora of words were either calculated from a corpus or extracted from lists provided by other researchers. The following seven features have been used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The (unlemmatized) word frequencies in the British National Corpus (BNC, http:// www.natcorp.ox.ac.uk/).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The Facebook frequency norms for American English and British English in Herdagdelen and Marelli (2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The Rovereto Twitter Corpus frequency norms (Herdagdelen and Marelli, 2017 ).",
"cite_spans": [
{
"start": 46,
"end": 76,
"text": "(Herdagdelen and Marelli, 2017",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The USENET Orthographic Frequencies from Shaoul and Chris (2006) .",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "Shaoul and Chris (2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The Hyperspace Analogue to Language (HAL) frequency norms provided by (Balota et al., 2007) for more that 40,000 words.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Balota et al., 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 The frequency word list derived from Google's ngram corpora available at https://github.com/hackerb9/ gwordlist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Features from Lexical Norms. The lexical norms of Age of Acquisition and Familiarity were taken from the Glasgow Norms which contain judges' assessment of 5,553 English words (Scott et al., 2019) .",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Scott et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Lexical Characteristics and Behavioral Measures from ELP. Twenty-three indices were extracted from the English Lexicon Project (ELP, Balota et al., 2007; Yarkoni et al., 2008) , a database that contains, for more than 40,000 words, reaction time and accuracy during lexical decision and naming tasks, made by many participants, as well as lexical characteristics (https://elexicon. wustl.edu/). Eight indices come from the behavioral measures, four for each task: average response latencies (raw and standardized), standard deviations, and accuracies. Fourteen indices come from the \"Orthographic, Phonological, Phonographic, and Levenshtein Neighborhood Metrics\" section of the dataset. These are all the metrics provided except Freq_Greater, Freq_G_Mean, Freq_Less, Freq_L_Mean, and Freq_Rel. These are variables whose initial analyzes showed that they were redundant with those selected. The last feature is the average bigram count of a word.",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "Balota et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 154,
"end": 175,
"text": "Yarkoni et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Bigram Association Measures. These features indicate the degree of association between the target word and the one that precedes it according to a series of indices calculated on the basis of the frequency in a reference corpus (i.e., the BNC) of the bigram and that of the two words that compose it, using the following association measures (AMs): pointwise mutual information and t-score (Church and Hanks, 1990) , z-score (Berry-Rogghe, 1973), log-likelihood Chi-square test (Dunning, 1993) , simple-ll (Evert, 2009) , Dice coefficient (Kilgarriff et al., 2014) and the two delta-p (Kyle et al., 2018) . Most of the formulas to compute these AMs are also provided in Evert (2009) and in Pecina (2010) . As these features mix together the assets of both collocations (by using association scores) and ngrams (by using contiguous pairs of words), Bestgen and Granger (2014) refer to them as collgrams. They make it possible not to rely exclusively on the frequency of the bigram in the corpus, which can be misleading because a bigram may be observed frequently, not because of its phraseological nature, but because it is made up of very frequent words (Bestgen, 2018) . Conversely, a relatively rare bigram, composed of rare words, may be typical of the language. Since word frequency is already accounted for by the corpus frequency features, it was desirable to employ indices that reduce the impact of this factor. Originating in works in lexicography and foreign language learning (Church and Hanks, 1990; Durrant and Schmitt, 2009; Bestgen, 2017 Bestgen, , 2019 , they have recently shown their usefulness in predicting the lexical complexity of multi-word expressions (Bestgen, 2021) . In the present case, it is assumed that these indices can serve as a proxy of the next word predictability (Kliegl et al., 2004) .",
"cite_spans": [
{
"start": 390,
"end": 414,
"text": "(Church and Hanks, 1990)",
"ref_id": "BIBREF7"
},
{
"start": 478,
"end": 493,
"text": "(Dunning, 1993)",
"ref_id": "BIBREF8"
},
{
"start": 506,
"end": 519,
"text": "(Evert, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 585,
"end": 604,
"text": "(Kyle et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 670,
"end": 682,
"text": "Evert (2009)",
"ref_id": "BIBREF10"
},
{
"start": 690,
"end": 703,
"text": "Pecina (2010)",
"ref_id": "BIBREF21"
},
{
"start": 1155,
"end": 1170,
"text": "(Bestgen, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 1488,
"end": 1512,
"text": "(Church and Hanks, 1990;",
"ref_id": "BIBREF7"
},
{
"start": 1513,
"end": 1539,
"text": "Durrant and Schmitt, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 1540,
"end": 1553,
"text": "Bestgen, 2017",
"ref_id": "BIBREF2"
},
{
"start": 1554,
"end": 1569,
"text": "Bestgen, , 2019",
"ref_id": "BIBREF4"
},
{
"start": 1677,
"end": 1692,
"text": "(Bestgen, 2021)",
"ref_id": "BIBREF5"
},
{
"start": 1802,
"end": 1823,
"text": "(Kliegl et al., 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
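{
"text": "As an illustration, a minimal sketch (standard formulas following Church and Hanks, 1990; not the author's SAS code) of two of these AMs, computed from raw counts in a reference corpus of n_tokens tokens:\n\nimport math\n\ndef pmi(f_bigram, f_w1, f_w2, n_tokens):\n    # pointwise mutual information of a contiguous word pair\n    p_xy = f_bigram / n_tokens\n    p_x, p_y = f_w1 / n_tokens, f_w2 / n_tokens\n    return math.log2(p_xy / (p_x * p_y))\n\ndef t_score(f_bigram, f_w1, f_w2, n_tokens):\n    # observed minus expected bigram count, scaled by sqrt(observed)\n    expected = f_w1 * f_w2 / n_tokens\n    return (f_bigram - expected) / math.sqrt(f_bigram)\n\n# a bigram seen 50 times, its words 1,000 and 2,000 times, in a 1M-token corpus\nprint(pmi(50, 1000, 2000, 1000000), t_score(50, 1000, 2000, 1000000))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},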
{
"text": "Feature coverage. Some words to predict are not present in these lists and the corresponding score is thus missing. Based on the complete dataset provided by the organizers, it happens in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 1% (Google ngram) to 17% (Facebook and Twitter) of the tokens for the corpus frequency features,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 9% for the ELP Lexical Characteristics, but a few features have as much as 41% missing values,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 11% for the ELP Behavioral Measures,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "\u2022 18% for the Bigram AMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "In total, sixteen tokens have missing values for all these features (Corpus Frequency, Lexical Characteristics and Behavioral Measures from ELP, and Bigram Association Measures). These tokens have however received values for the length and position features. All the missing values were handled by LightGBM default procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
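{
"text": "A minimal sketch (toy data) of how such missing values can be passed through: LightGBM treats NaN as missing by default, so a token absent from a list simply carries NaN for that feature.\n\nimport numpy as np\nimport lightgbm as lgb\n\nX = np.array([[5.2, np.nan], [3.1, 0.8], [np.nan, 1.4]] * 100)  # NaN = value missing from the list\ny = np.tile([10.0, 20.0, 15.0], 100)\nlgb.LGBMRegressor(objective='regression_l1', n_estimators=10).fit(X, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},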
{
"text": "During the test phase, teams were allowed to submit three runs. My three submissions were all based on the features described above, the only differences between them resulting from changes in the LigthGBM parameters. They were set at their default values except those shown in Table 1 . The official performances of the top five challenge submissions are given in The first submission was based on the parameters selected during the development phase. They were identical for the five DVs. For the other two submissions, a random grid search coded in python was used to try optimizing the parameters independently for each DV. The parameter space for this first random search is provided in Appendix A. As the measure of the challenge is the MAE averaged across the five DVs and as the system MAE for fixProp was up to 15 times higher than that of the other DVs, the optimized parameters for this variable were selected. Additional analyzes showed that they also made it possible to improve performance on the four other DVs. Their values are given in Table 1 . Certain initial choices were only slightly modified. The value of other parameters such as the maximum number of leaves and the feature fraction were markedly increased, suggesting that the risk of overfit was relatively low (see https://lightgbm.readthedocs.io/ en/latest/Parameters-Tuning.html).",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1053,
"end": 1060,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},
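{
"text": "A minimal, self-contained sketch of such a random search (assumed procedure; the toy grid and data below are placeholders, much smaller than the actual space given in Appendix A):\n\nimport random\nimport numpy as np\nimport lightgbm as lgb\nfrom sklearn.model_selection import GroupKFold\nfrom sklearn.metrics import mean_absolute_error\n\nX, y = np.random.rand(2000, 40), np.random.rand(2000) * 100\ngroups = np.random.randint(0, 800, size=2000)  # sentence ids\n\nparam_grid = {'num_leaves': [4, 8, 15, 30],\n              'learning_rate': [0.005, 0.01, 0.03, 0.05],\n              'feature_fraction': [0.1, 0.5, 0.9]}\n\nbest_mae, best_params = float('inf'), None\nfor _ in range(50):  # 50 random draws from the grid\n    params = {k: random.choice(v) for k, v in param_grid.items()}\n    maes = []\n    for tr, va in GroupKFold(n_splits=5).split(X, y, groups=groups):\n        m = lgb.LGBMRegressor(objective='regression_l1', n_estimators=500, **params)\n        m.fit(X[tr], y[tr])\n        maes.append(mean_absolute_error(y[va], np.clip(m.predict(X[va]), 0, 100)))\n    if np.mean(maes) < best_mae:\n        best_mae, best_params = np.mean(maes), params",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},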
{
"text": "In this system, the number of iterations was optimized (thanks to the early stopping procedure) for each DV and sets at the fourth highest value: 3,740 for nFix, 3,829 for TRT, 2,861 for GPT, 3,497 for FFD, and 3,305 for fixProp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},
{
"text": "For the third submission, a new round of random optimization was conducted by evaluating parameter values close to those selected for Run 2, independently for each DV. As it only got slightly better performance than Run 2, these parameter values are not shown to save space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},
{
"text": "As shown in Table 2 , Runs 2 and 3 ranked at the first 2 places of the challenge. This result was largely due to their better performance for fixProp since the TALEP system, second in the challenge, achieved significantly better performance for three of the five DVs, but these have less impact on the official measurement. An analysis, carried out after the end of the challenge, showed that the system would not have been more effective (average MAE of 3.8138) if, during the first optimization step, a specific model for each DV had been selected.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},
{
"text": "Using Pearson's linear correlation coefficient as a measure of effectiveness, which is unaffected by the differences in means and variability between the five DVs, Run 3 obtains an average r of 0.812 on the test set (min = 0.792 for GPT; max = 0.838 for fixProp). This value is relatively high, but it can only really be interpreted by taking into account the reliability of the average real eye-tracking feature values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},
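{
"text": "A minimal sketch (toy data) illustrating why Pearson's r is unaffected by these scaling differences: it is invariant to (positive) linear transformations of either variable.\n\nimport numpy as np\nfrom scipy.stats import pearsonr\n\ny_true = np.random.rand(100)\ny_pred = y_true + np.random.normal(0, 0.1, 100)\nr1, _ = pearsonr(y_true, y_pred)\nr2, _ = pearsonr(y_true * 335 + 21, y_pred)  # rescale one side, as fixProp vs. FFD\nprint(np.isclose(r1, r2))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Submitted to the Challenge",
"sec_num": "4.1"
},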
{
"text": "The first part of Table 3 presents the main results of an ablation procedure aimed at examining the impact of the different types of features on the system performance. It gives the average MAE as well as the difference in percentage between each system and the best run for the average MAE and for the five DVs. It must be first stressed that all features based on lemmas and POS-tag, the two Glasgow norms and the length of the token that follows the target are useless for predicting the test set since without them the system achieves a MAE of 3.8134. They are thus discarded in all the ablation analyses. The target's positions in the sentence and the length features are clearly essential. Among the features resulting from corpora and behavioral data, it is the bigram association measures and the frequencies in the corpora that are the most useful.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},
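{
"text": "A minimal sketch of this ablation loop (hypothetical column groupings; evaluated here on the training data for brevity, whereas the paper reports test-set MAE):\n\nimport numpy as np\nimport lightgbm as lgb\nfrom sklearn.metrics import mean_absolute_error\n\nX, y = np.random.rand(2000, 10), np.random.rand(2000) * 100\nfeature_sets = {'length': [0, 1, 2], 'position': [3, 4], 'frequency': [5, 6, 7, 8, 9]}\n\ndef mae_without(dropped):\n    keep = [j for j in range(X.shape[1]) if j not in dropped]\n    m = lgb.LGBMRegressor(objective='regression_l1').fit(X[:, keep], y)\n    return mean_absolute_error(y, np.clip(m.predict(X[:, keep]), 0, 100))\n\nfor name, cols in feature_sets.items():\n    print(name, mae_without(cols))  # higher MAE = more useful feature set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},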
{
"text": "Generally speaking, the feature sets have comparable utility for all DVs. However, we observe that the position in the sentences is particularly important for predicting GPT while the length of the target is more useful for nFix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},
{
"text": "The second part of Table 3 presents an analysis of the utility of optimizing the LightGBM parameters, based on the best system. Optimizing RMSE instead of MAE is especially penalizing for GPT. eters is particularly penalizing when RMSE is the criterion.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},
{
"text": "A final question concerns the benefits of employing LightGBM instead of another regression algorithm when the proposed features are used. To try to provide at least a partial answer, I trained a multiple linear regression model on the basis of the features used, while adding for each feature, for which the calculation was possible, a second feature containing the logarithm of the initial value. I replaced the missing data with 0, which is probably not optimal. A stepwise regression procedure with a threshold to enter sets at p = 0.01 and a threshold to exit sets at p = 0.05 was employed to construct for each DV a model on the learning set and apply it to the test set. The results obtained are given in the second to last row of Table 3 . The performances are clearly less good. It is even worse than the performance level of a LightGBM model based only on the length and position features (see the last row of Table 3 ). This regression system would have been ranked 10th in the challenge.",
"cite_spans": [],
"ref_spans": [
{
"start": 737,
"end": 744,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 919,
"end": 926,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},
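{
"text": "A minimal sketch (assumed procedure, using statsmodels OLS p-values) of stepwise selection with a p-to-enter of 0.01 and a p-to-exit of 0.05:\n\nimport numpy as np\nimport statsmodels.api as sm\n\ndef stepwise(X, y, p_enter=0.01, p_exit=0.05):\n    selected, remaining = [], list(range(X.shape[1]))\n    while True:\n        changed = False\n        # forward step: add the candidate with the smallest p-value below p_enter\n        pvals = {}\n        for j in remaining:\n            fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()\n            pvals[j] = fit.pvalues[-1]\n        if pvals:\n            best = min(pvals, key=pvals.get)\n            if pvals[best] < p_enter:\n                selected.append(best); remaining.remove(best); changed = True\n        # backward step: drop the selected feature with the largest p-value above p_exit\n        if selected:\n            fit = sm.OLS(y, sm.add_constant(X[:, selected])).fit()\n            worst = int(np.argmax(fit.pvalues[1:]))\n            if fit.pvalues[1:][worst] > p_exit:\n                remaining.append(selected.pop(worst)); changed = True\n        if not changed:\n            return selected",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Usefulness",
"sec_num": "4.2"
},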
{
"text": "The system proposed for the 2021 CMCL Shared Task on Eye-Tracking Data Prediction was particularly effective, obtaining the first place in the challenge, but it must be kept in mind that the system that came second is superior to it for three of the five DVs. The analyzes carried out to understand its pros and cons indicate that optimizing the Light-GBM parameters is quite beneficial to it as well as the different sets of features derived from corpora and behavioral data, including bigram AMs which, to my knowledge, have never been employed for this type of task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "It would have been interesting to relate these observations to the psycholinguistic literature on the factors that influence eye fixations, but this is unfortunately not possible here, for lack of space. In addition, this would first require deepening the ablation analyzes by simultaneously considering several feature sets. For instance, the lack of usefulness of the POS-tags could simply result from the links (at least partial) between them and the frequency and length of the tokens. Likewise, some of the bigram AMs are relatively sensitive to the frequency of the words that compose them (e.g., the t-score favors frequent bigrams which are usually composed of frequent words). It is thus highly probable that some of the features in the different sets (frequencies, behavioral data...) are redundant and can be removed without impairing the performance of the system. This is a potential development path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The relation between TRT and GPT is not obvious to me",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author wishes to thank the organizers of this shared task for putting together this valuable event and the reviewers for their very constructive comments. He is a Research Associate of the Fonds de la Recherche Scientifique -FNRS (F\u00e9d\u00e9ration Wallonie Bruxelles de Belgique). Computational resources were provided by the supercomputing facilities of the UCLouvain (CISM/UCL) and the Consortium des Equipements de Calcul Intensif en F\u00e9d\u00e9ration Wallonie Bruxelles (CECI).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "At the request of a reviewer, the parameter space for the first random search is provided below. Those for the second random search are not provided as they did not allow to really improve the performances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "'max_bin': [16, 32, 48, 64, 80, 96, 112, 128, 160, 192, 224, 256] , 'min_data_in_bin': [2, 3, 4, 5, 6, 8, 10, 12, 15, 20] , 'num_leaves': [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 18, 21, 25, 30] , 'learning_rate': [0.005,0.007,0.009, 0.011,0.014,0.018,0.022,0.026,0.03, 0.035,0.05], 'min_data_in_leaf': [2,3,4,5,6,7,8,9,10, 11,12,13,15,18,21,25,30], 'max_depth': [3,4,5,6,7,8,9,10,11,12,13, -1], 'feature_fraction': list(np.linspace( 0.01, 0.90, 91)), 'bagging_freq': list(range(3, 7, 1)), 'bagging_fraction': list(np.linspace( 0.50, 0.90, 9)) }",
"cite_spans": [
{
"start": 11,
"end": 15,
"text": "[16,",
"ref_id": null
},
{
"start": 16,
"end": 19,
"text": "32,",
"ref_id": null
},
{
"start": 20,
"end": 23,
"text": "48,",
"ref_id": null
},
{
"start": 24,
"end": 27,
"text": "64,",
"ref_id": null
},
{
"start": 28,
"end": 31,
"text": "80,",
"ref_id": null
},
{
"start": 32,
"end": 35,
"text": "96,",
"ref_id": null
},
{
"start": 36,
"end": 40,
"text": "112,",
"ref_id": null
},
{
"start": 41,
"end": 45,
"text": "128,",
"ref_id": null
},
{
"start": 46,
"end": 50,
"text": "160,",
"ref_id": null
},
{
"start": 51,
"end": 55,
"text": "192,",
"ref_id": null
},
{
"start": 56,
"end": 60,
"text": "224,",
"ref_id": null
},
{
"start": 61,
"end": 65,
"text": "256]",
"ref_id": null
},
{
"start": 87,
"end": 90,
"text": "[2,",
"ref_id": null
},
{
"start": 91,
"end": 93,
"text": "3,",
"ref_id": null
},
{
"start": 94,
"end": 96,
"text": "4,",
"ref_id": null
},
{
"start": 97,
"end": 99,
"text": "5,",
"ref_id": null
},
{
"start": 100,
"end": 102,
"text": "6,",
"ref_id": null
},
{
"start": 103,
"end": 105,
"text": "8,",
"ref_id": null
},
{
"start": 106,
"end": 109,
"text": "10,",
"ref_id": null
},
{
"start": 110,
"end": 113,
"text": "12,",
"ref_id": null
},
{
"start": 114,
"end": 117,
"text": "15,",
"ref_id": null
},
{
"start": 118,
"end": 121,
"text": "20]",
"ref_id": null
},
{
"start": 138,
"end": 141,
"text": "[4,",
"ref_id": null
},
{
"start": 142,
"end": 144,
"text": "5,",
"ref_id": null
},
{
"start": 145,
"end": 147,
"text": "6,",
"ref_id": null
},
{
"start": 148,
"end": 150,
"text": "7,",
"ref_id": null
},
{
"start": 151,
"end": 153,
"text": "8,",
"ref_id": null
},
{
"start": 154,
"end": 156,
"text": "9,",
"ref_id": null
},
{
"start": 157,
"end": 160,
"text": "10,",
"ref_id": null
},
{
"start": 161,
"end": 164,
"text": "11,",
"ref_id": null
},
{
"start": 165,
"end": 168,
"text": "12,",
"ref_id": null
},
{
"start": 169,
"end": 172,
"text": "13,",
"ref_id": null
},
{
"start": 173,
"end": 176,
"text": "15,",
"ref_id": null
},
{
"start": 177,
"end": 180,
"text": "18,",
"ref_id": null
},
{
"start": 181,
"end": 184,
"text": "21,",
"ref_id": null
},
{
"start": 185,
"end": 188,
"text": "25,",
"ref_id": null
},
{
"start": 189,
"end": 192,
"text": "30]",
"ref_id": null
},
{
"start": 301,
"end": 388,
"text": "[2,3,4,5,6,7,8,9,10, 11,12,13,15,18,21,25,30], 'max_depth': [3,4,5,6,7,8,9,10,11,12,13,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "param_grid = {",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Simpson, and Rebecca Treiman",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Balota",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"J"
],
"last": "Yap",
"suffix": ""
},
{
"first": "Keith",
"middle": [
"A"
],
"last": "Hutchison",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cortese",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Kessler",
"suffix": ""
},
{
"first": "Bjorn",
"middle": [],
"last": "Loftis",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Neely",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"B"
],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Behavior Research Methods",
"volume": "39",
"issue": "",
"pages": "445--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Balota, Melvin J. Yap, Keith A. Hutchi- son, Michael J. Cortese, Brett Kessler, Bjorn Loftis, James H. Neely, Douglas L. Nelson, Greg B. Simp- son, and Rebecca Treiman. 2007. The English lex- icon project. Behavior Research Methods, 39:445- 459.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The computation of collocations and their relevance in lexical studies",
"authors": [
{
"first": "L",
"middle": [
"M"
],
"last": "Godelieve",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Berry-Rogghe",
"suffix": ""
}
],
"year": 1973,
"venue": "The Computer and Literary Studies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Godelieve L. M. Berry-Rogghe. 1973. The compu- tation of collocations and their relevance in lexical studies. In Adam J Aitken, Richard W. Bailey, and Neil Hamilton-Smith, editors, The Computer and Literary Studies. Edinburgh University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Beyond single-word measures: L2 writing assessment, lexical richness and formulaic competence",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
}
],
"year": 2017,
"venue": "System",
"volume": "69",
"issue": "",
"pages": "65--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen. 2017. Beyond single-word measures: L2 writing assessment, lexical richness and formu- laic competence. System, 69:65-78.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating the frequency threshold for selecting lexical bundles by means of an extension of the Fisher's exact test",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
}
],
"year": 2018,
"venue": "Corpora",
"volume": "13",
"issue": "",
"pages": "205--228",
"other_ids": {
"DOI": [
"10.3366/cor.2018.0144"
]
},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen. 2018. Evaluating the frequency thresh- old for selecting lexical bundles by means of an ex- tension of the Fisher's exact test. Corpora, 13:205- 228.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Evaluation de textes en anglais langue \u00e8trang\u00e8re et s\u00e9ries phras\u00e9ologiques : comparaison de deux proc\u00e9dures automatiques librement accessibles",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "24",
"issue": "",
"pages": "81--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen. 2019. Evaluation de textes en anglais langue \u00e8trang\u00e8re et s\u00e9ries phras\u00e9ologiques : com- paraison de deux proc\u00e9dures automatiques librement accessibles. Revue fran\u00e7aise de linguistique ap- pliqu\u00e9e, 24:81-94.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "LAST at SemEval-2021 Task 1: improving multi-word complexity prediction using bigram association measures",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of SemEval-2021",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen. 2021. LAST at SemEval-2021 Task 1: improving multi-word complexity prediction us- ing bigram association measures. In Proceedings of SemEval-2021.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quantifying the development of phraseological competence in L2 English writing: An automated approach",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
},
{
"first": "Sylviane",
"middle": [],
"last": "Granger",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Second Language Writing",
"volume": "26",
"issue": "",
"pages": "28--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen and Sylviane Granger. 2014. Quan- tifying the development of phraseological compe- tence in L2 English writing: An automated approach. Journal of Second Language Writing, 26:28-41.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguis- tics, 19(1):61-74.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "To what extent do native and non-native writers make use of collocations?",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Durrant",
"suffix": ""
},
{
"first": "Norbert",
"middle": [],
"last": "Schmitt",
"suffix": ""
}
],
"year": 2009,
"venue": "International Review of Applied Linguistics in Language Teaching",
"volume": "47",
"issue": "",
"pages": "157--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Durrant and Norbert Schmitt. 2009. To what ex- tent do native and non-native writers make use of collocations? International Review of Applied Lin- guistics in Language Teaching, 47:157-177.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Corpora and collocations",
"authors": [
{
"first": "Stefan",
"middle": [
"Evert"
],
"last": "",
"suffix": ""
}
],
"year": 2009,
"venue": "Corpus Linguistics. An International Handbook",
"volume": "",
"issue": "",
"pages": "1211--1248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Evert. 2009. Corpora and collocations. In Anke L\u00fcdeling and Merja Kyt\u00f6, editors, Corpus Linguis- tics. An International Handbook, pages 1211-1248. Mouton de Gruyter.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Social media and language processing: How Facebook and Twitter provide the best frequency estimates for studying word recognition",
"authors": [
{
"first": "Amac",
"middle": [],
"last": "Herdagdelen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "41",
"issue": "",
"pages": "976--995",
"other_ids": {
"DOI": [
"10.1111/cogs.12392"
]
},
"num": null,
"urls": [],
"raw_text": "Amac Herdagdelen and Marco Marelli. 2017. So- cial media and language processing: How Facebook and Twitter provide the best frequency estimates for studying word recognition. Cognitive Science, 41:976-995.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "CMCL 2021 shared task on eye-tracking prediction",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Yohei",
"middle": [],
"last": "Oseki",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Emmanuele Chersoni, Cassandra Ja- cobs, Yohei Oseki, Laurent Pr\u00e9vot, and Enrico San- tus. 2021. CMCL 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cog- nitive Modeling and Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "CogniVal: A framework for cognitive word embedding evaluation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "De La Torre",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "538--549",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1050"
]
},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Antonio de la Torre, Nicolas Langer, and Ce Zhang. 2019. CogniVal: A framework for cognitive word embedding evaluation. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 538-549, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1038/sdata.2018.291"
]
},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data. 5:180291, 5(180291):1-13.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "2020",
"issue": "",
"pages": "138--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Marius Troendle, Ce Zhang, and Nicolas Langer. 2020. ZuCo 2.0: A dataset of phys- iological recordings during natural reading and an- notation. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 138-146. European Language Resources Association.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "LightGBM: A highly efficient gradient boosting decision tree",
"authors": [
{
"first": "Guolin",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Taifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weidong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Qiwei",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "3146--3154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Informa- tion Processing Systems 30, pages 3146-3154. Cur- ran Associates, Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Michelfeit, Pavel Rychl\u00fd, and V\u00edt Suchomel. 2014. The Sketch Engine: ten years on. Lexicography",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "V\u00edt",
"middle": [],
"last": "Baisa",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Bu\u0161ta",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Jakub\u00ed\u010dek",
"suffix": ""
},
{
"first": "Vojt\u011bch",
"middle": [],
"last": "Kov\u00e1\u0159",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "7--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff, V\u00edt Baisa, Jan Bu\u0161ta, Milo\u0161 Jakub\u00ed\u010dek, Vojt\u011bch Kov\u00e1\u0159, Jan Michelfeit, Pavel Rychl\u00fd, and V\u00edt Suchomel. 2014. The Sketch En- gine: ten years on. Lexicography, 1(1):7-36.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Length, frequency, and predictability effects of words on eye movements in reading",
"authors": [
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Grabner",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rolfs",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engbert",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Cognitive Psychology",
"volume": "16",
"issue": "",
"pages": "262--284",
"other_ids": {
"DOI": [
"10.1080/09541440340000213"
]
},
"num": null,
"urls": [],
"raw_text": "Reinhold Kliegl, Ellen Grabner, Martin Rolfs, and Ralf Engbert. 2004. Length, frequency, and predictabil- ity effects of words on eye movements in reading. European Journal of Cognitive Psychology, 16:262- 284.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The tool for the automatic analysis of lexical sophistication (TAALES): version 2.0",
"authors": [
{
"first": "Kristopher",
"middle": [],
"last": "Kyle",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Crossley",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Berger",
"suffix": ""
}
],
"year": 2018,
"venue": "Behavior Research Methods",
"volume": "50",
"issue": "",
"pages": "1030--1046",
"other_ids": {
"DOI": [
"10.3758/s13428-017-0924-4"
]
},
"num": null,
"urls": [],
"raw_text": "Kristopher Kyle, Scott Crossley, and Cynthia Berger. 2018. The tool for the automatic analysis of lexi- cal sophistication (TAALES): version 2.0. Behavior Research Methods, 50:1030-1046.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Applications of Eye Tracking in Language Processing and Other Areas",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "23--46",
"other_ids": {
"DOI": [
"10.1007/978-981-13-1516-9_2"
]
},
"num": null,
"urls": [],
"raw_text": "Abhijit Mishra and Pushpak Bhattacharyya. 2018. Ap- plications of Eye Tracking in Language Processing and Other Areas, pages 23-46. Springer Singapore, Singapore.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lexical association measures and collocation extraction",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
}
],
"year": 2010,
"venue": "Language Resources & Evaluation",
"volume": "44",
"issue": "",
"pages": "137--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Pecina. 2010. Lexical association measures and collocation extraction. Language Resources & Eval- uation, 44:137-158.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Probabilistic part-of-speech tagging using decision trees",
"authors": [
{
"first": "Helmutt",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1994,
"venue": "International Conference on New Methods in Language Processing",
"volume": "",
"issue": "",
"pages": "44--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmutt Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Con- ference on New Methods in Language Processing, pages 44-49.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Glasgow norms: Ratings of 5,500 words on nine scales",
"authors": [
{
"first": "Graham",
"middle": [
"G"
],
"last": "Scott",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Keitel",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Becirspahic",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sara",
"middle": [
"C"
],
"last": "Sereno",
"suffix": ""
}
],
"year": 2019,
"venue": "Behavior Research Methods",
"volume": "51",
"issue": "",
"pages": "1258--1270",
"other_ids": {
"DOI": [
"10.3758/s13428-018-1099-3"
]
},
"num": null,
"urls": [],
"raw_text": "Graham G. Scott, Anne Keitel, Marc Becirspahic, Bo Yao, and Sara C. Sereno. 2019. The Glasgow norms: Ratings of 5,500 words on nine scales. Be- havior Research Methods, 51:1258-1270.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "USENET orthographic frequencies for 111",
"authors": [
{
"first": "Cyrus",
"middle": [],
"last": "Shaoul",
"suffix": ""
},
{
"first": "Westbury",
"middle": [],
"last": "Chris",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyrus Shaoul and Westbury Chris. 2006. USENET or- thographic frequencies for 111,627 English words.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SemEval-2021 task 1: Lexical complexity prediction",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Shardlow",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Shardlow, Richard Evans, Gustavo Paetzold, and Marcos Zampieri. 2021. SemEval-2021 task 1: Lexical complexity prediction. In Proceedings of the 14th International Workshop on Semantic Evalu- ation (SemEval-2021).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Moving beyond Coltheart's N: A new measure of orthographic similarity",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Yarkoni",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Balota",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Yap",
"suffix": ""
}
],
"year": 2008,
"venue": "Psychonomic Bulletin & Review",
"volume": "16",
"issue": "",
"pages": "971--979",
"other_ids": {
"DOI": [
"10.3758/PBR.15.5.971"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Yarkoni, David A. Balota, and Melvin Yap. 2008. Moving beyond Coltheart's N: A new measure of or- thographic similarity. Psychonomic Bulletin & Re- view, 16:971-979.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"text": "LightGBM parameters for the first two runs.",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>Team</td><td colspan=\"2\">Run Mean</td><td>nFix</td><td>FFD GPT TRT fixProp</td></tr><tr><td>LAST</td><td>3</td><td colspan=\"3\">3.8134 3.879 0.655 2.197 1.524 10.812</td></tr><tr><td>LAST</td><td>2</td><td colspan=\"3\">3.8159 3.886 0.655 2.199 1.523 10.817</td></tr><tr><td>TALEP</td><td>1</td><td colspan=\"3\">3.8328 3.761 0.662 2.180 1.486 11.076</td></tr><tr><td>LAST</td><td>1</td><td colspan=\"3\">3.8664 3.943 0.662 2.237 1.545 10.944</td></tr><tr><td colspan=\"2\">TorontoCL 2</td><td colspan=\"3\">3.9287 3.944 0.671 2.227 1.516 11.286</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Performance (MAE) for the five best runs submitted to the challenge. Best scores are bolded.",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Performance (MAE) of different system versions and deviation (%) from the best run (M AE = 3.813). Minimum and maximum values across DVs for each row are bolded.",
"content": "<table/>",
"html": null
}
}
}
}