{
"paper_id": "U19-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:07:48.002583Z"
},
"title": "Readability of Twitter Tweets for Second Language Learners",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Jacob",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RMIT University -School of Science",
"location": {
"addrLine": "124 La Trobe St Melbourne VIC",
"postCode": "3000"
}
},
"email": "[email protected]"
},
{
"first": "Alexandra",
"middle": [
"L"
],
"last": "Uitdenbogerd",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RMIT University -School of Science",
"location": {
"addrLine": "124 La Trobe St Melbourne VIC",
"postCode": "3000"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Optimal language acquisition via reading requires the learners to read slightly above their current language skill level. Identifying material at the right level is the essential role of automatic readability measurement. Short message platforms such as Twitter offer the opportunity for language practice while reading about current topics and engaging in conversation in small doses, and can be filtered according to linguistic criteria to suit the learner. In this research, we explore how readable tweets are for English language learners and which factors contribute to their readability. With participants from six language groups, we collected 14,659 data points, each representing a tweet from a pool of 4100 tweets, and a judgement of perceived readability. Traditional readability measures and features failed on the data-set, but demographic data showed that judgements were largely genuine and reflected reported language skill, which is consistent with other recent studies. We report on the properties of the data set and implications for future research.",
"pdf_parse": {
"paper_id": "U19-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "Optimal language acquisition via reading requires the learners to read slightly above their current language skill level. Identifying material at the right level is the essential role of automatic readability measurement. Short message platforms such as Twitter offer the opportunity for language practice while reading about current topics and engaging in conversation in small doses, and can be filtered according to linguistic criteria to suit the learner. In this research, we explore how readable tweets are for English language learners and which factors contribute to their readability. With participants from six language groups, we collected 14,659 data points, each representing a tweet from a pool of 4100 tweets, and a judgement of perceived readability. Traditional readability measures and features failed on the data-set, but demographic data showed that judgements were largely genuine and reflected reported language skill, which is consistent with other recent studies. We report on the properties of the data set and implications for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since the first half of the twentieth century researchers have analysed texts to determine their readability, that is, how easy the text is to read and comprehend, often expressed as levels of linguistic education, knowledge, age or experience. The findings around readability have been applied to education for selecting appropriate reading material for students, in communication with governmental bodies to reach a higher number of citizens (Temnikova et al., 2015) and in marketing/public relations of companies to increase the reach of their materials (Risius and Pape, 2016) .",
"cite_spans": [
{
"start": 444,
"end": 468,
"text": "(Temnikova et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 557,
"end": 580,
"text": "(Risius and Pape, 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
{
"text": "While there have been many studies on readability of regular text for students and foreign language learners, the same is not true for the microblog text genre. Twitter is a popular platform for reading current topics and engaging in social interaction, providing a cross-cultural, crossinterest, and cross-language platform for reading social media posts of up to 280 characters in length, and a filtered feed has potential as a source of regular reading material for learners. While the Flesch readability of English language tweets has been analysed to discover demographic trends and to compare them to other modern text genres (Davenport and DeLine, 2014), and judgements of tweet clarity for emergency communication has been researched (Temnikova et al., 2015) , to our knowledge tweets have not been studied in relation to English as an Additional Language (EAL, a term that recognises that it may be a third language, for example).",
"cite_spans": [
{
"start": 742,
"end": 766,
"text": "(Temnikova et al., 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
{
"text": "Our aim was to extend readability research to tweets for EAL learners. Tweets are different from ordinary text due to their short length, hashtags, mentions (user identifiers preceded by an @ symbol), links, and other non-standard text tokens that they contain. This poses challenges for traditional readability formulae, which assume regular text, as found in books and periodicals. This study evaluates the applicability of readability formulae and the influence of the unique expressions used in tweets such as hashtags, mentions and links. The goal was to find predictors that increase accuracy in classifying Twitter tweets to language reading levels, which will assist users to find more appropriate material for their reading abilities and aid institutions to adjust their published tweets for foreign language reader target audiences. The Flesch formula shows an inverse relationship between readability and the number of syllables per word (lexical complexity) and the number of words per sentence (grammatical complexity). This simple measure has become a standard for text analysis in other fields of research, and is often used as a baseline for readability research, hence we include it in our study. Dale-Chall (1948) is another user-derived readability measure, based on children with English as a first language:",
"cite_spans": [
{
"start": 1214,
"end": 1231,
"text": "Dale-Chall (1948)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
{
"text": "0.1579 dif f icult words words * 100 +0.0496 words sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
{
"text": "Dale and Chall's research determined that the percentage of difficult words in a given text and the number of words per sentence influence readability. This formula assumes every word not on a list of 3000 words a fourth-grade American student should be familiar with is difficult. It would be interesting to see the interaction between the Dale-Chall formula and research based on the findings of Uitdenbogerd (2005) , which show that cognates (words that are same or very similar between the native language and the foreign language of study) influence the understanding of sentences for students of foreign languages.",
"cite_spans": [
{
"start": 398,
"end": 417,
"text": "Uitdenbogerd (2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
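{
"text": "The two formulae above can be made concrete with a short Python sketch (our illustration, not code from this study; the syllable counter is a crude vowel-group heuristic, and easy_words stands in for Dale and Chall's 3,000-word list):\n\nimport re\n\ndef count_syllables(word):\n    # Crude heuristic: count vowel groups; a placeholder, not a dictionary-based counter\n    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))\n\ndef flesch_reading_ease(words, n_sentences):\n    # Flesch (1948): 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)\n    syllables = sum(count_syllables(w) for w in words)\n    return 206.835 - 1.015 * len(words) / n_sentences - 84.6 * syllables / len(words)\n\ndef dale_chall_raw(words, n_sentences, easy_words):\n    # Dale-Chall (1948) raw score, as given above\n    difficult = sum(1 for w in words if w.lower() not in easy_words)\n    return 0.1579 * (difficult / len(words) * 100) + 0.0496 * (len(words) / n_sentences)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},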
{
"text": "Most readability measures and indexes are only considered valid for text samples with a minimum number of words or sentences (Collins-Thompson and Callan, 2004; Homan et al., 1994) , and therefore not intended for typical tweet text. However, we include the above formulae and related classic readability features in this initial study.",
"cite_spans": [
{
"start": 125,
"end": 160,
"text": "(Collins-Thompson and Callan, 2004;",
"ref_id": "BIBREF0"
},
{
"start": 161,
"end": 180,
"text": "Homan et al., 1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "2"
},
{
"text": "Davenport and DeLine (2014) studied the readability of a corpus of 17.4 Million tweets. They modified the Flesch formula by treating each tweet as a single sentence, due to their brevity and unconventional punctuation. This approach may no longer be adequate for a Twitter corpus, given the new character limit of 280 characters for tweets. Temnikova et al. (2015) analysed the text difficulty of emergency messages on social media including Twitter. They used crowd-sourcing (CrowdFlower) to present a questionnaire of 500 tweets to participants, who rated them as one of very clear, needs improvement, or very unclear. Additionally, participants could suggest how to write a more understandable version of the tweet. Amongst the resulting recommendations are to use easy vocabulary and short complete sentences, exclude mentions, and minimise hashtag use. Even though the resulting recommendations appear to be valid, it is unclear what the background of the participants was, which can impact how text is perceived. In contrast, for our study, we selected and recorded the background of participants from specific populations.",
"cite_spans": [
{
"start": 341,
"end": 364,
"text": "Temnikova et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-related Research",
"sec_num": "3.2"
},
{
"text": "There are generally two types of research design for predicting readability. The first models reading difficulty using data collected from human participants. The readability measure by Kincaid et al. (1975) is one example of this. They invited 531 participants from two navy bases in the US to read from a set of eighteen passages of training manuals. The task was to answer questions about the manuals by filling in missing words (Cloze test). From the results, Kincaid et al. deduced the formula to predict the reading grade level for navy personnel. The advantage of this approach is that the collected data and resulting model represents the genuine user experience of text difficulty. The main challenge is obtaining sufficient data from the target user population for analysis.",
"cite_spans": [
{
"start": 186,
"end": 207,
"text": "Kincaid et al. (1975)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
{
"text": "The second research method, which has become prominent in NLP communities, uses large corpora of text samples that have been labelled by experts or publishers, to train machine learning models. One example is the research of Fran\u00e7ois and Fairon (2012) , who trained a machine learning algorithm with a text corpus labelled according to the levels of the Common European Framework of Reference for Languages (CEFR), to model the readability of French text for second language learners (Fran\u00e7ois and Fairon, 2012) . This approach allows modern classifiers to be trained on large data-sets of features. However, as has been confirmed by Vajjala and Lucic (2019) , expert or publisher labels of text are a poor substitute for genuine user experience, and even the choice of method of measuring the reading experience can lead to large differences in results. This echoes the results found elsewhere in usability research (Jeffries and Desurvire, 1992).",
"cite_spans": [
{
"start": 225,
"end": 251,
"text": "Fran\u00e7ois and Fairon (2012)",
"ref_id": "BIBREF4"
},
{
"start": 634,
"end": 658,
"text": "Vajjala and Lucic (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
{
"text": "There was no Twitter corpus annotated with difficulty levels available, hence our research design consisted of a user study of tweet readability, specifically for people with English as an additional language. Our approach has the added advantage of reflecting the user experience of language learners matching the demographics of the participants. Participants completed a questionnaire that collected demographic data and reading difficulty judgements of a set of tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
{
"text": "Wilson VanVoorhis and Morgan (2007) recommends that with more than six predictors, to have at least ten participants per predictor. With 10-15 predictors from the survey (such as age groups, Twitter affinity, education levels) and text features from the tweets (such as the number of syllables, characters or Hashtags), we needed at least 150 participants per language. To account for contradicting, invalid or otherwise wrong responses that would need to be discarded from the corpus, we increased the target number of recruits to 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participant Recruitment",
"sec_num": "4.1"
},
{
"text": "We tried to recruit 200 native speakers from each of the six target languages of our study (Spanish, Portuguese, German, Dutch, Cantonese, and Mandarin) via the crowd-sourcing platform Figure Eight 1 . The actual questionnaire was hosted on Qualtrics 2 , a specialised website for conducting questionnaires. A participant would be forwarded to the Qualtrics questionnaire via a link once they accepted the survey questionnaire.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 197,
"text": "Figure Eight",
"ref_id": null
}
],
"eq_spans": [],
"section": "Participant Recruitment",
"sec_num": "4.1"
},
{
"text": "The Twitter corpus we used was merged from two Twitter corpora: one corpus from unpublished research by Klerke et al. (2016) ; and a larger corpus initially containing 6,000 randomly captured tweets using the Twitter Stream Application Developer Interface.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "Klerke et al. (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},
{
"text": "The second corpus was captured in August 2018 directly from the Twitter stream, utilising the tweepy python library 3 , which allows searching Original: People Swea They KNOW E V E R Y T H I N G Bhou Me Bhuh They Dont Know NOTHING Bhuu my Name Corrected: People swear they know everything about me, but they don't know nothing but my name Figure 1 : An example of tweet simplification for a specified number of tweets that contain defined keywords. We used both functions to search for about 400 English language tweets for each first language, containing at least one word from a list of cognates of that language. This ensured that the tweet corpus contained a minimum number of cognates from each language. Due to specific post-collection steps that lowered the final corpus of tweets, more tweets were collected than needed.",
"cite_spans": [],
"ref_spans": [
{
"start": 339,
"end": 347,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},
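{
"text": "A minimal sketch of the keyword-based capture described above, using the tweepy 3.x API that was current in 2018 (this is our reconstruction, not the study's script; the credentials and cognate list are placeholders, and in tweepy 4.x the call is api.search_tweets):\n\nimport tweepy\n\n# Placeholder credentials; real values come from a Twitter developer account\nauth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')\nauth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')\napi = tweepy.API(auth)\n\ncognates = ['hotel', 'taxi', 'restaurant']  # illustrative cognates, not the study's lists\ntweets = []\nfor word in cognates:\n    # Fetch up to 400 English tweets containing the cognate keyword\n    for status in tweepy.Cursor(api.search, q=word, lang='en').items(400):\n        tweets.append(status.text)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},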
{
"text": "Tweets were filtered for offensive content, using an automatic profanity check, followed by a manual process by the researchers to filter any remaining offensive tweets. Lastly, we filtered and deleted duplicates (such as retweets) leaving the entire corpus at 4700 tweets, commencing with 873 from the Klerke corpus. The first 4000 were used for the survey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},
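{
"text": "The duplicate-removal step can be sketched as follows (our simplification; the study's exact normalisation rules and profanity check are not specified beyond the description above):\n\nimport re\n\ndef normalise(text):\n    # Strip retweet prefixes, links and extra whitespace so retweets collapse to one key\n    text = re.sub(r'^RT\\s+@\\w+:\\s*', '', text)\n    text = re.sub(r'https?://\\S+', '', text)\n    return re.sub(r'\\s+', ' ', text).strip().lower()\n\ndef deduplicate(tweets):\n    seen, unique = set(), []\n    for t in tweets:\n        key = normalise(t)\n        if key not in seen:\n            seen.add(key)\n            unique.append(t)\n    return unique",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},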
{
"text": "Due to platform limitations we broke the survey up into five surveys for each language: four of 1000 tweets and one of 100 tweets used for further validity checking and analysis. The 100-tweet survey consisted of tweets originally containing colloquialisms and/or social media features, such as emojis, which were manually selected from the pool of 4000. The tweets were stripped of emojis, hashtags, mentions, and repetitious content; spelling corrected, and the text adjusted in other ways to standardise it (for example, see Figure 1 ). It was used to test the questionnaire setup prior to releasing the main surveys.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 536,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Twitter Corpus Collection",
"sec_num": "4.2"
},
{
"text": "To avoid reading fatigue and to stay within the budget, each person made 20 judgements. This approach should have resulted in six judgements per question for 4000 tweets. That is, for each tweet, we would have at least two human judgements from each language family group. Using the Qualtrics randomisation function, the tweet questions were selected randomly from the pool of 4100 tweets to minimise ordering effects. To ensure an even distribution of judgements, each tweet was presented to at least one participant before any were shown a second time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questionnaire",
"sec_num": "4.3"
},
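{
"text": "The even-distribution constraint (every tweet shown once before any is shown twice) amounts to always sampling from the least-served tweets. The actual study used Qualtrics' built-in randomisation; the following Python sketch is only an illustrative equivalent:\n\nimport random\n\ndef assign_tweets(display_counts, k=20):\n    # display_counts: dict mapping tweet_id -> times shown so far\n    # Prefer the least-shown tweets; break ties randomly\n    pool = sorted(display_counts, key=lambda t: (display_counts[t], random.random()))\n    chosen = pool[:k]\n    for t in chosen:\n        display_counts[t] += 1\n    return chosen",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questionnaire",
"sec_num": "4.3"
},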
{
"text": "Participants were asked for their age, gender, country, education and foreign language knowl-edge, to assist in providing context for the ground truth collected, as well as to capture potential confounding variables known to influence vocabulary knowledge. We then presented the participants with 20 tweets for them to judge according to reading difficulty. Participants were to position a slider on a scale from 1 (very difficult) to 10 (very easy) representing their perception of the tweet's readability, as shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Questionnaire",
"sec_num": "4.3"
},
{
"text": "The last task for the participant was to answer a short translation task to confirm the participant does indeed speak their stated first language. The translation question was based on common proverbs in the participant's native language, which they needed to translate from English to their native language. This had the advantage that it was a relatively easy task, since proverbs are usually widely known, but allowed us to evaluate if the participant speaks the claimed language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questionnaire",
"sec_num": "4.3"
},
{
"text": "Using the IP range of specific countries, we restricted the survey job to specific language speakers in countries where they predominately or officially spoke that language. This way we had another layer to ensure we would only recruit the right target participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Questionnaire",
"sec_num": "4.3"
},
{
"text": "For the 100-tweet test surveys we lowered the number of tweets per job from twenty to ten. After seeing that target participation was reached for three languages (Spanish, Portuguese and German) we released all other jobs, which were kept open for about a month. Table 1 shows that Spanish, Portuguese and German participants were most active, while Chinese and Dutch-speaking countries had much lower participation. In the case of Dutch-targeted jobs, someone hacked the survey and exhausted the available budget, leaving us with few judgements for Dutch speakers.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Survey Execution and Outcome",
"sec_num": "4.4"
},
{
"text": "Data cleansing prior to analysis consisted of the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Restructuring and Cleansing",
"sec_num": "5"
},
{
"text": "\u2022 Transposing the data columns into a format suitable for analysis \u2022 Harmonising the contents of several columns such as country of origin or languages. \u2022 Matching and unifying the columns about educations levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Restructuring and Cleansing",
"sec_num": "5"
},
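{
"text": "The transposition step can be sketched with pandas (the file name and column names are placeholders for the Qualtrics export, which is not reproduced here):\n\nimport pandas as pd\n\n# One row per participant, with judgement columns tweet_1 ... tweet_20 (placeholder names)\nwide = pd.read_csv('qualtrics_export.csv')\nlong = wide.melt(\n    id_vars=['participant_id', 'age', 'education', 'native_language'],\n    value_vars=[f'tweet_{i}' for i in range(1, 21)],\n    var_name='slot',\n    value_name='judgement',\n).dropna(subset=['judgement'])  # drop slots that were never shown to this participant",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Restructuring and Cleansing",
"sec_num": "5"
},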
{
"text": "\u2022 Deleting rows with failed validation questions. Table 1 shows the final data set size for each language after the data cleansing steps were finished, . When visualising the judgement data as a histogram (see Figure 3 ) it shows an exponential distribution from very difficult to very easy perceived tweets. Fitting a line to the log of the number of judgements at each rating level has an R 2 of 0.97. Thus most tweets were evaluated as 10 (very easy to read and understand) by participants.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 210,
"end": 218,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data Restructuring and Cleansing",
"sec_num": "5"
},
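{
"text": "The log-linear fit behind the R\u00b2 of 0.97 can be reproduced as follows (the counts below are placeholders; the real per-rating counts come from the survey data):\n\nimport numpy as np\n\nratings = np.arange(1, 11)\ncounts = np.array([150, 210, 230, 320, 410, 500, 700, 950, 1150, 1700])  # placeholder\nlog_counts = np.log(counts)\nslope, intercept = np.polyfit(ratings, log_counts, 1)  # straight line through log counts\npred = slope * ratings + intercept\nr_squared = 1 - np.sum((log_counts - pred) ** 2) / np.sum((log_counts - log_counts.mean()) ** 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Restructuring and Cleansing",
"sec_num": "5"
},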
{
"text": "When looking at the average ratings shown in Table 2, it can be seen that the more time someone spends with Twitter, the easier it is for participants to read tweets. Participants who used Twitter daily or weekly rated the tweets at 8.39 on average, while participants that never used Twitter averaged 7.99. Presumably frequent Twitter users are more accustomed to the linguistic conventions of Twitter and find it easier to understand tweets. This would partially explain why the majority of tweets are rated 10, as the majority of participants were heavy Twitter users. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter Use",
"sec_num": "6.1"
},
{
"text": "Formal school and language education had a strong influence on the judgements in the data (see Figure 4 ). Participants without any formal education didn't rate any tweets as 10, while the group of PhD graduates have the highest fraction of tweets rated 10. PhD graduates judged tweets as 9.07 on average, whereas participants without formal education rated their tweets on average at 7.25. During data cleansing, we mapped all reported English education levels to the CEFR standard, which has levels in increasing order of skill, A1, A2, B1, B2, C1 and C2 respectively. This mapping was possible for 4523 data points, which represents 30% of all judgements. Our data shows that the higher the English education, the more likely the tweets are judged higher. The average of A2 participants is 7.8 (30% of tweets given a 10), while the average of the C2 group is 8.74 (65% of judgements being 10) and average ratings increase monotonically between those two levels. The exception is A1, which had an average of 9.35. This could be due to a Dunning-Kruger effect, in which those with minimal knowledge of a subject have a disproportionately high opinion of their knowledge, a problem with the CEFR mapping at the A1 end, or randomly assigned tweets coincidentally being easier to read. It should also be noted that there were only 160 A1-based judgements, whereas all other language groups had at least 547. Those with A1 level English or less are likely to have found the user interface itself challenging, let alone the tweets they were allocated, which may have impacted their participation, resulting in a high proportion of \"false beginners\" in the cohort. We also captured any additional languages participants spoke besides their native language and English. Table 4 shows that the more languages a person spoke, the higher the average rating per tweet. The population of people speaking more than one additional language is relatively small, but so is the standard deviation. It is likely that ",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1764,
"end": 1771,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Education",
"sec_num": "6.2"
},
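{
"text": "The CEFR mapping performed during cleansing can be sketched as a lookup table (the self-reported labels below are examples of our own choosing; the study's full mapping table is not reproduced here):\n\nCEFR_ORDER = ['A1', 'A2', 'B1', 'B2', 'C1', 'C2']  # increasing order of skill\n\n# Illustrative mapping from self-reported English levels to CEFR\nLEVEL_MAP = {\n    'beginner': 'A1',\n    'elementary': 'A2',\n    'intermediate': 'B1',\n    'upper intermediate': 'B2',\n    'advanced': 'C1',\n    'proficient': 'C2',\n}\n\ndef to_cefr(reported_level):\n    # Returns None when the reported level cannot be mapped (roughly 70% of judgements)\n    return LEVEL_MAP.get(reported_level.strip().lower())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Education",
"sec_num": "6.2"
},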
{
"text": "Twitter is a social media platform, where additional features are used to graphically express emotions and other items (emojis); or connect with other users (mentions), tweets (hashtags) or websites in and outside of Twitter (links). We look at each of these features below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
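{
"text": "Counting these features can be sketched with regular expressions (our illustration; the emoji pattern covers only the main emoji code blocks and is an approximation):\n\nimport re\n\nEMOJI = re.compile('[\\\\U0001F300-\\\\U0001FAFF\\\\u2600-\\\\u27BF]')  # approximate emoji ranges\n\ndef tweet_features(text):\n    return {\n        'hashtags': len(re.findall(r'#\\\\w+', text)),\n        'mentions': len(re.findall(r'@\\\\w+', text)),\n        'links': len(re.findall(r'https?://\\\\S+', text)),\n        'emojis': len(EMOJI.findall(text)),\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},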
{
"text": "Emojis Emojis are ideograms used in messaging, including stylised facial expressions for displaying emotions, places, animals, food, and flags, among other objects. For the large data set, tweets with 0, 1, 2, 3 and >3 emojis respectively all had ratings between 8.17 and 8.51 with no obvious trend, and standard deviations from 1.84 to 1.98. The emojis did not seem to influence the judgements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "We also analysed a subset (40) of the modified tweets from the small test set from which emojis had been stripped. The average judgements of tweets with emojis removed (8.18, n = 246) was lower than that of the original tweets (M = 8.33, n = 93) . Due to the universal understanding of emojis across languages, they might increase readability, or their removal from tweets may take essential semantic content away. However, the difference in means is small, the variability high, and the tweets themselves were not randomly selected, so strong conclusions cannot be drawn at this stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "Hashtags Hashtags are used as metadata tags to reference themes or content and make them easily findable within and across social media platforms. The question is if they influence the readability of tweets, since they are often composed of joined and abbreviated words, for example, #muppetgovernment or #ImACeleb. In our corpus the number of hashtags present ranged from zero to twenty, but with very few containing more than 3 hashtags, and no obvious trend was observed as hashtags increased without binning. As with emojis, the subset of 29 modified tweets stripped of hashtags was judged less readable on average (8.07, n = 195) than the original ones (8.43, n = 61).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "A reverse trend was found in the larger data-set (see Table 5 ), with minimal overlap of confidence intervals, indicating confidence in the estimate of the population mean. However, differences in the mean are much smaller than those of the standard deviation, so hashtags are not strong predictors of readability.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "Mentions Mentions use the @-sign to refer to other users on Twitter, and like hashtags, are often used on social media, typically either at the beginning or end of tweets. Twitter does not count mentions in the character limit but only allows up to 50 mentions per tweet. We used the test subset (22) to compare tweets that are stripped of mentions against those with mentions. On average, the judgement with modified tweets is 8.19 (N = 156), while the ones with mentions lie at 7.89 (N = 72). These numbers indicate that mentions decrease readability. Interestingly, when broken down by Twitter use, daily Twitter use led to tweets with mentions being rated higher than those without. The opposite was true for those who used Twitter weekly or less frequently. This behaviour could mean that frequent users are better able to filter or appropriately process mentions when reading.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "Links Links are often used on Twitter to refer to other resources on the internet, such as news articles or videos. The links are often abbreviated to save space (for example, https://t.co/hle8l0AO1i). We found no evidence that links influence judgements of tweets, whether we used the number of links or their length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter-specific Text Features",
"sec_num": "6.3"
},
{
"text": "We analysed the relationship between different text features and judgements, such as the number of characters per word, number of syllables per word, and sentence length. Most of them had negligible impact, except the total number of characters or words seems to show a trend on average that the more words, the lower the rating, but since they feed into readability measures, we would like to point out a few findings with traditional readability formulae.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readability Measures",
"sec_num": "6.4"
},
{
"text": "We compared judgements with Flesch scores, grouping scores into bins, which showed a peak at RE \u2208 [77, 81] , indicating most tweets were in the fairly easy to easy range. A slight upward trend was observed, indicating weak agreement between RE and judgements. Some Flesch scores were extremely negative, due to the sentence length and frequent use of words with high syllable counts. For example the following tweet is one sentence long, with 30 Syl-lables and eight words with a Flesch Score of -118.53.",
"cite_spans": [
{
"start": 98,
"end": 102,
"text": "[77,",
"ref_id": null
},
{
"start": 103,
"end": 106,
"text": "81]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Flesch Reading Ease",
"sec_num": null
},
{
"text": "#FollowMikeaveli #FollowMikeaveli #Fol-lowMikeaveli #FollowMikeaveli NO QUESTIONS JUST FOLLOW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flesch Reading Ease",
"sec_num": null
},
{
"text": "While this tweet had a very negative score, meaning very difficult, its three ratings were 10, 10 and 7. We also tried the Flesch-Kincaid formula, which had similar trends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flesch Reading Ease",
"sec_num": null
},
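{
"text": "As a check on the arithmetic (our own verification, using the standard Flesch constants 206.835, 1.015 and 84.6): with one sentence, eight words and 30 syllables, the score is 206.835 - 1.015 \u00d7 (8/1) - 84.6 \u00d7 (30/8) = 206.835 - 8.12 - 317.25 = -118.535, which matches the reported -118.53 up to rounding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flesch Reading Ease",
"sec_num": null
},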
{
"text": "Dale-Chall The feature that is unique to Dale Chall's formula is the number of difficult words, being all words not in a list of 3000 easy words. Our data shows that on average, tweets with a higher number of difficult words are judged more difficult. The Dale-Chall formula however, shows the opposite trend. That is, the harder the tweet according to the Dale-Chall score, the easier it was judged by participants. Additionally, we did an analysis and exchanged cognates for \"easy words\" in the formula to see if this would have any effect. The trend reversed to the expected direction, but was again weak. A more nuanced approach is probably needed, with high frequency words from the list retained and combined with cognates. This will be explored in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flesch Reading Ease",
"sec_num": null
},
{
"text": "Using both Pearson and Spearman correlations, we calculated correlation matrices between all columns and features. We used both formulations, as Pearson calculates the linear relationship between two variables, while Spearman evaluates the monotonic relationship, which is more appropriate for ordinal data or not entirely linear data. (See Table 6.) No single feature has a strong relationship to the judgements. The range is between negative and positive 9.6%, which is quite low. Correlation between native languages was also low, regardless of language similarity. This could mean that readability is different for each language. The highest positive Pearson correlation, and second highest Spearman, is the number of additional languages a participant speaks. Education and English level are also highly placed, confirming the previous finding that education or language skills have a stronger relationship with readability than the content itself. Twitter-specific features like the number of emojis and hashtags have little relationship with the judgements, the strongest being for mentions (Spearman -5.1%). In general, we find that demographic data are stronger predictors than text features.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 350,
"text": "Table 6.)",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Correlation",
"sec_num": "6.5"
},
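{
"text": "The two correlation measures can be computed with scipy (a generic sketch; the two vectors are placeholders standing in for a feature column and the judgement column):\n\nfrom scipy.stats import pearsonr, spearmanr\n\nfeature = [2, 0, 1, 3, 0, 1, 2, 0]          # e.g. hashtag counts (placeholder data)\njudgements = [7, 10, 9, 6, 10, 8, 8, 10]    # slider judgements (placeholder data)\n\nr_pearson, p_pearson = pearsonr(feature, judgements)     # linear relationship\nr_spearman, p_spearman = spearmanr(feature, judgements)  # monotonic (rank) relationship",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation",
"sec_num": "6.5"
},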
{
"text": "Our correlation matrix showed that no feature has a strong correlation to the judgements. It made us question whether the results were trustworthy or whether the participants put in any effort. While we had a validation question for each participant to check if they spoke the native language they claim, we did not implement a similar question to measure sincerity in answering. However, we have some indication that participants answered thoughtfully. First, the slider for tweets was initially set to 1, representing very hard to read and understand, but most of the tweets were rated 10. It means the participants moved the bar to provide their response. Second, the test subset (17) we manipulated to more straightforward language (see, for example, Figure 1 ), had an average judgement of 8.35 (n = 99) compared to the original average of 7.55 (n = 56), meaning simplified ones were judged as easier to read. These results lower our doubts about the sincerity of the answers by the participants.",
"cite_spans": [],
"ref_spans": [
{
"start": 755,
"end": 763,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Confidence in Results",
"sec_num": "6.6"
},
{
"text": "The features we extracted have a low correlation to the judgements. However, these are not the only features that can be extracted. We saw that uncommon or incorrect words have an effect on readabil-ity, therefore, constructing a measure of the severity of incorrectness might show stronger correlation than we currently have. Other possible features could be a percentile of non-lexical words, presence of particular grammatical terms or frequency of named entities to name a few. From the extracted features, emojis and mentions, while not showing high correlation themselves, may influence judgements when isolated and compared to tweets stripped of them. We are also yet to explore the use of features such as perplexity. Our machine learning results using further features will be reported elsewhere.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "A difficulty with the current data set is that the majority of judgements are at the maximum of the scale, indicating a mismatch between participants and text. A new experiment that selects more homogeneous participant groups based on confounding variables such as age, Twitter usage, English levels and education may be more successful. Obtaining more judgements per tweet would allow more conclusions about user perceptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "We started this research by asking what influences the readability of English tweets for foreign language speakers?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We designed and executed a survey on a crowdsourcing platform where 865 participants made 10-20 readability judgements from a pool of 4100 tweets. It did not produce the results we expected, as all features showed a low correlation (\u2264 9.6%) to the judgements. These features included traditional readability formulae and their components, which in other studies correlate well with user judgements (for example, Uitdenbogerd (2005) achieved 9-85% correlation for traditional readability features and formulae). This study revealed that traditional readability formulae do not work well on tweets. Another observation we made is that some demographic data had stronger predictive power than the text features themselves. For example, English skill level, number of languages known besides English, and the native language showed the highest correlation out of the available features.",
"cite_spans": [
{
"start": 412,
"end": 431,
"text": "Uitdenbogerd (2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "As for what makes it hard or easy to read tweets, we do not have a definitive answer, but our research points in the following directions. Slang, wrongly written and uncommon words seem to lower the readability. The number of words or characters and readability formulae have limited predictive value on the readability. From the Twitter-related features, emojis may improve readability, while using mentions and hashtags diminish it for those less familiar with tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "All these insights leave us to further investigate in future studies how strongly the observed effects influence the readability of tweets, and thereby build a useful model for filtering Twitter content for language learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://www.figure-eight.com 2 https://www.qualtrics.com/au 3 https://www.tweepy.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A language modeling approach to predicting reading difficulty",
"authors": [
{
"first": "Kevyn",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
},
{
"first": "James",
"middle": [
"P"
],
"last": "Callan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevyn Collins-Thompson and James P Callan. 2004. A language modeling approach to predicting read- ing difficulty. In Proceedings of the Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 193-200. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A formula for predicting readability: Instructions",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Jeanne",
"middle": [
"S"
],
"last": "Chall",
"suffix": ""
}
],
"year": 1948,
"venue": "Educational Research Bulletin",
"volume": "27",
"issue": "2",
"pages": "37--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Dale and Jeanne S. Chall. 1948. A formula for predicting readability: Instructions. Educational Research Bulletin, 27(2):37-54.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The readability of tweets and their geographic correlation with education",
"authors": [
{
"first": "James",
"middle": [
"R",
"A"
],
"last": "Davenport",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "DeLine",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. A. Davenport and Robert DeLine. 2014. The readability of tweets and their geographic correla- tion with education. Computing Research Reposi- tory, abs/1401.6058.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A new readability yardstick",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Flesch",
"suffix": ""
}
],
"year": 1948,
"venue": "Journal of Applied Psychology",
"volume": "32",
"issue": "3",
"pages": "221--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221-233.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An AI readability formula for French as a foreign language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
},
{
"first": "Cedrick",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "466--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Fran\u00e7ois and Cedrick Fairon. 2012. An AI readability formula for French as a foreign language. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 466-477. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The development and validation of a formula for measuring single-sentence test item readability",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Homan",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Linder",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Educational Measuremen",
"volume": "31",
"issue": "4",
"pages": "349--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Homan, Margaret Hewitt, and Jean Linder. 1994. The development and validation of a formula for measuring single-sentence test item readability. Journal of Educational Measuremen, 31(4):349- 358.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Usability testing vs. heuristic evaluation: was there a contest?",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jeffries",
"suffix": ""
},
{
"first": "Heather",
"middle": [],
"last": "Desurvire",
"suffix": ""
}
],
"year": 1992,
"venue": "ACM SIGCHI Bulletin",
"volume": "24",
"issue": "4",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jeffries and Heather Desurvire. 1992. Usability testing vs. heuristic evaluation: was there a contest? ACM SIGCHI Bulletin, 24(4):39-41.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Kincaid",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"P"
],
"last": "Fishburne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"S"
],
"last": "Rogers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chissom",
"suffix": ""
}
],
"year": 1975,
"venue": "Naval Technical Training Command",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Peter Kincaid, Robert P. Fishburne Jr., Richard L.Rogers, and Brad S. Chissom. 1975. Derivation of New Readability Formulas (Automated Readabil- ity Index, Fog Count and Flesch Reading Ease For- mula) for Navy Enlisted Personnel. Naval Technical Training Command.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Twitter corpus from eye gaze study. Twitter data-set from an unpublished paper",
"authors": [
{
"first": "Sigrid",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"L"
],
"last": "Uitdenbogerd",
"suffix": ""
},
{
"first": "Falk",
"middle": [],
"last": "Scholer",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigrid Klerke, Alexandra L. Uitdenbogerd, Falk Sc- holer, and Tim Baldwin. 2016. Twitter corpus from eye gaze study. Twitter data-set from an unpublished paper.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Developing and evaluating a readability measure for microblogging communication",
"authors": [
{
"first": "Marten",
"middle": [],
"last": "Risius",
"suffix": ""
},
{
"first": "Theresia",
"middle": [],
"last": "Pape",
"suffix": ""
}
],
"year": 2016,
"venue": "E-Life: Web-Enabled Convergence of Commerce, Work, and Social Life",
"volume": "",
"issue": "",
"pages": "217--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marten Risius and Theresia Pape. 2016. Developing and evaluating a readability measure for microblog- ging communication. In E-Life: Web-Enabled Con- vergence of Commerce, Work, and Social Life, pages 217-221. Springer International Publishing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The case for readability of crisis communications in social media",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Temnikova",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Vieweg",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion",
"volume": "",
"issue": "",
"pages": "1245--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Temnikova, Sarah Vieweg, and Carlos Castillo. 2015. The case for readability of crisis communica- tions in social media. In Proceedings of the 24th In- ternational Conference on World Wide Web, WWW '15 Companion, pages 1245-1250. Association for Computing Machinery.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Readability of French as a foreign language and its uses",
"authors": [
{
"first": "Alexandra",
"middle": [
"L"
],
"last": "Uitdenbogerd",
"suffix": ""
}
],
"year": 2005,
"venue": "ADCS 2005: Proceedings of the Tenth Australasian Document Computing Symposium",
"volume": "",
"issue": "",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra L. Uitdenbogerd. 2005. Readability of French as a foreign language and its uses. In ADCS 2005: Proceedings of the Tenth Australasian Docu- ment Computing Symposium, pages 19-25.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On understanding the relation between expert annotations of text readability and target reader comprehension",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
},
{
"first": "Ivana",
"middle": [],
"last": "Lucic",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "349--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala and Ivana Lucic. 2019. On under- standing the relation between expert annotations of text readability and target reader comprehension. In Proceedings of the Fourteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 349-359, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Understanding power and rules of thumb for determining sample sizes",
"authors": [
{
"first": "Carmen",
"middle": [
"R Wilson"
],
"last": "Vanvoorhis",
"suffix": ""
},
{
"first": "Betsy",
"middle": [
"L"
],
"last": "Morgan",
"suffix": ""
}
],
"year": 2007,
"venue": "Tutorials in Quantitative Methods for Psychology",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carmen R. Wilson VanVoorhis and Betsy L. Morgan. 2007. Understanding power and rules of thumb for determining sample sizes. Tutorials in Quantitative Methods for Psychology, pages 43-50.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example tweet question including slider."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Histogram of judgements"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Chart of judgements by education level broader language knowledge improves the reading capabilities of unusual text such as tweets."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Chart of average judgements by mention broken down by Twitter use. Error bars are 95% confidence intervals."
},
"TABREF1": {
"html": null,
"text": "Number of data points after cleaning",
"content": "<table><tr><td>6 Descriptive Statistics</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "Mean and standard deviation of tweet readability judgements across Twitter usage groups",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"text": "Average judgement by additional languages spoken",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF9": {
"html": null,
"text": "Mean and standard deviation of judgements according to number of hashtags",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF11": {
"html": null,
"text": "Pearson and Spearman rank correlation between judgements and features.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}