|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:12:47.029538Z" |
|
}, |
|
"title": "Successes and failures of Menzerath's law at the syntactic level", |
|
"authors": [ |
|
{ |
|
"first": "Aleksandrs", |
|
"middle": [], |
|
"last": "Berdicevskis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Gothenburg", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Menzerath's law is a quantitative generalization which predicts a negative correlation between the mean size of parts of a unit and the number of parts in the unit. In this paper, I use Universal Dependencies to perform a cross-linguistic test of Menzerath's law at two syntactic levels: whether the number of clauses in a sentence negatively correlates with mean clause length in this sentence and whether the number of words in a clause negatively correlates with mean word length in this clause. Menzerath's largely holds at the former level and largely does not at the latter. I discuss other interesting patterns observed in the data and propose some tentative partial explanations.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Menzerath's law is a quantitative generalization which predicts a negative correlation between the mean size of parts of a unit and the number of parts in the unit. In this paper, I use Universal Dependencies to perform a cross-linguistic test of Menzerath's law at two syntactic levels: whether the number of clauses in a sentence negatively correlates with mean clause length in this sentence and whether the number of words in a clause negatively correlates with mean word length in this clause. Menzerath's largely holds at the former level and largely does not at the latter. I discuss other interesting patterns observed in the data and propose some tentative partial explanations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Quantitative laws such as, for instance, Zipfian rank-frequency law (Piantadosi, 2014) or abbreviation law (Bentz and Ferrer-i-Cancho, 2016) are perhaps ones of the most universal generalizations that can be made about language. Universal here can be understood as both 'true for all / most languages' and 'true for various domains / levels of language'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 86, |
|
"text": "(Piantadosi, 2014)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 140, |
|
"text": "(Bentz and Ferrer-i-Cancho, 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Another oft-cited generalization is Menzerath's law (Altmann, 1980; Stave et al., 2021) , also called Menzerath-Altmann law. Menzerath's law predicts a negative correlation between the mean size of parts of a unit and the number of parts in the unit. Thus, the more sub-units (constituents) a linguistic unit (carrier unit, or construct) has, the shorter these units are expected to be on average. For instance, the more clauses a sentence contains, the shorter the mean length of these clauses (in words) is expected to be (Altmann, 1980) . Menzerath's law has been tested for various types of units in various languages (and also beyond language) and mostly (though not universally) found to be true (see an overview in Section 2). Most studies, however, used relatively small corpora (or even dictionaries), often of just one language, often not open-access, often shallowly annotated for the specific study. I use the Universal Dependencies (UD) collection to perform the largest-scale (to date) study of Menzerath's law at two syntactic levels: sentence-clause-word and clause-word-grapheme (see Section 3). I demonstrate that Menzerath's law works quite well at the former level, but not at the latter (see Sections 4, 5 and 6).", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 67, |
|
"text": "(Altmann, 1980;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 87, |
|
"text": "Stave et al., 2021)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 539, |
|
"text": "(Altmann, 1980)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There is currently no consensus on why Menzerath's law emerges (or why it does not), and thus I cannot fully explain the observed results. In Section 7, however, I discuss which insights can be gleaned from the UD analysis and which hypotheses deserve further testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Any particular application of Menzerath's law has to be described at three levels: the length of a unit (for instance, clause), measured in sub-units (for instance, words), is supposed to negatively correlate with the mean length of sub-units, measured either in sub-sub-units (for instance, phonemes or graphemes) or using a suitable continuous measure (for instance, seconds). In this paper, two triples will be analyzed: sentence-clause-word and clause-word-grapheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Defining Menzerath's law", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Menzerath's law has been shown to hold at different levels in different languages, but it is sometimes overlooked that there are at least two ways to interpret the claim Menzerath's law holds. One interpretation (which will be used in this paper) is 'the mean size of a sub-unit and the number of sub-units in the unit are negatively correlated' (Stave et al., 2021) . Another interpretation is 'the relation between the mean size of a sub-unit (y) and the number of sub-units (x) can be approximated by a specific function'. The function is typically assumed to be y(x) = ax b e \u2212cx (Altmann, 1980) , often simplified to y(x) = ax b , though other variants have also been proposed (Mili\u010dka, 2014) . Sometimes the first interpretation is labelled as Menzerath's law, while the second one as Menzerath-Altmann's law (Ferreri-Cancho et al., 2014) . Both interpretations rest on the assumption that number of sub-units and the mean size of sub-units are related (Ma\u010dutek et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 366, |
|
"text": "(Stave et al., 2021)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 584, |
|
"end": 599, |
|
"text": "(Altmann, 1980)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 697, |
|
"text": "(Mili\u010dka, 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 815, |
|
"end": 844, |
|
"text": "(Ferreri-Cancho et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 959, |
|
"end": 981, |
|
"text": "(Ma\u010dutek et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Defining Menzerath's law", |
|
"sec_num": "2" |
|
}, |
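{
"text": "To make the two interpretations concrete, the following sketch (Python; a minimal illustration with toy numbers and an illustrative function name, not the code released with this paper) fits the function y(x) = ax^b e^(-cx) with scipy.optimize.curve_fit and, separately, tests the correlational interpretation with Spearman's coefficient:\nimport numpy as np\nfrom scipy.optimize import curve_fit\nfrom scipy.stats import spearmanr\n\ndef menzerath_altmann(x, a, b, c):\n    # Menzerath-Altmann function: y(x) = a * x**b * exp(-c * x)\n    return a * np.power(x, b) * np.exp(-c * x)\n\n# Toy data: number of sub-units per unit (x) and mean sub-unit size (y)\nx = np.array([1, 2, 3, 4, 5, 6], dtype=float)\ny = np.array([9.1, 7.8, 7.2, 6.9, 6.7, 6.6])\n\n# Interpretation 2: fit the Menzerath-Altmann function\nparams, _ = curve_fit(menzerath_altmann, x, y, p0=(10.0, -0.1, 0.01), maxfev=10000)\n\n# Interpretation 1: a negative monotonic association\nrho, p = spearmanr(x, y)\nprint(params, rho, p)\nA good fit under the second interpretation does not guarantee a negative correlation under the first one, which is exactly the divergence discussed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Defining Menzerath's law",
"sec_num": "2"
},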
|
{ |
|
"text": "While it is possible that there is a negative correlation and the relation can be approximated by Menzerath-Altmann's function (a power law with an exponential cutoff), it may also be that the latter is true, while the former is not. In Chinese, for instance, Menzerath-Altmann's function works well for sentence-clause-word (R 2 = 0.85) and clause-word-component (R 2 = 0.77; component is a constructing unit of a logogram) (Chen and Liu, 2019) . Visual inspection of Chen and Liu's data, however, shows that the relation is clearly non-monotonic (down-up), and measuring Spearman's correlation coefficient shows there is no negative correlation for clause-word-component (r = 0.42, p = 0.016), while for sentence-clause-word the results are somewhat ambivalent (r = \u22120.51, p = 0.052). In a similar vein, Buk and Rovenchak (2008) report fitting Menzerath-Altmann's function for sentenceclause-word in Ukrainian, but the visualization of the data shows a non-monotonic (up-down) pattern, and Spearman's coefficient (calculated only for those sentence lengths which Buk and Rovenchak consider \"reliable\", that is, those for which at least 20 datapoints are available) does not show a negative correlation (r = \u22120.13, p = 0.748).", |
|
"cite_spans": [ |
|
{ |
|
"start": 425, |
|
"end": 445, |
|
"text": "(Chen and Liu, 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 498, |
|
"text": "Chen and Liu's data, however,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 806, |
|
"end": 830, |
|
"text": "Buk and Rovenchak (2008)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Defining Menzerath's law", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It can be argued that the latter, fit-a-model approach offers a more exact description of the reality. Following this logic, many studies (Cramer, 2005; Kelih, 2010; Baixeries et al., 2013; Mili\u010dka, 2014) focus on finding the most appropriate formula and fine-tuning the parameters. On the other hand, if the model is complex enough, virtually any curve can be approximated reasonably well. To avoid overfitting, a clear theoretical explanation of the model is desirable. The existing explanations of Menzerath's law (see Section 2.3) mostly address the negative correlation, though attempts at explaining the Menzerath-Altmann's function and even interpreting its parameters have also been made (K\u00f6hler, 1984) . I am not, however, aware of any explanation that would have successfully addressed the non-monotonic patterns observed above. Thus, in this study, Menzerath's law is understood as the negative correlation, without an attempt to describe the exact mathematical relation. The purpose of the study is to find out whether the law holds at the syntactic level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 152, |
|
"text": "(Cramer, 2005;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 165, |
|
"text": "Kelih, 2010;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 189, |
|
"text": "Baixeries et al., 2013;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 204, |
|
"text": "Mili\u010dka, 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 696, |
|
"end": 710, |
|
"text": "(K\u00f6hler, 1984)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Defining Menzerath's law", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Altmann (1980, p. 129) predicts that Menzerath's law will hold for sentence-clause-word (otherwise the sentence presumably loses clarity). He also considers sentence-word-subword unit (word length can be measured in different ways: phonemes, graphemes, syllables, morphemes), but does not make a specific prediction, noting that \"a monotonic decrease of word length can hardly be expected\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 22, |
|
"text": "(1980, p. 129)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing evidence", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The first hypothesis (sentence-clause-word) has been tested before. Apart from the references mentioned in Section 2.1, Teupenhayn and Altmann (1984) report that Menzerath's law holds for German. Hou et al. (2017) find that in Chinese, it holds in formal written texts, but not in other registers. Xu and He (2020) , however, demonstrate that in English, it holds for different registers. Roukk (2011) , analyzing parallel texts in Russian and German and Russian and English, reports poor fitting results. Her data are too small for a correlation test to yield reliable results, but from a visual inspection it is obvious that there is no clear downward trend.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 149, |
|
"text": "Teupenhayn and Altmann (1984)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 213, |
|
"text": "Hou et al. (2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 314, |
|
"text": "Xu and He (2020)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 401, |
|
"text": "Roukk (2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing evidence", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The second hypothesis (clause-word-subword unit) has received much less attention, but see the aforementioned study by Buk and Rovenchak (2008) and a relevant discussion by Altmann (1983) Ma\u010dutek et al. (2017) look at clause-phrase-word in Czech, where phrase is defined as a subtree consisting of a node which is directly dependent of the clause predicate and all nodes that are (directly or indirectly) dependent on this node. They report good fitting results, and applying Spearman's test to their data (following their approach, only to those clause lengths that have more than 10 datapoints) yields a strong negative correlation (r = \u22120.92, p = 0.001).", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 143, |
|
"text": "Buk and Rovenchak (2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 187, |
|
"text": "Altmann (1983)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing evidence", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Note that all these studies have at least one (often more) of the following limitations: they were performed for one language only; small corpora were used; those corpora had shallow annotation (for instance, number of clauses estimated by simply counting the number of finite verbs), often created specifically for the study; the data are not openly available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing evidence", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In fact, the only large cross-linguistic study on Menzerath's law that I am aware of was performed by Stave et al. (2021) , but it deals with the word-morpheme-grapheme level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 121, |
|
"text": "Stave et al. (2021)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing evidence", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "There is no ubiquitously accepted explanation of why Menzerath's law is expected to hold. It has been argued to be mathematically trivial, but Ferrer-i-Cancho et al. (2014) provide evidence against this view.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "It is typically assumed that Menzerath's law, similarly to Zipf's abbreviation law, emerges from efficiency pressures, but what exactly those pressures are is not fully understood. K\u00f6hler (1984) hypothesizes that the sub-units and the \"structural information\" about the connections between them must be stored at the same \"register\" in the brain (Vulanovic and K\u00f6hler, 2005) . As the number of sub-units increases, so does the amount of structural information, and the only way to free up the necessary storage space is to use shorter sub-units. Mili\u010dka (2014) develops this hypothesis further, but in both accounts the notion of structural information remains very vague. Gustison et al. (2016) claim that Menzerath's law is caused by pressure for compression. They propose a unified formal mathematical framework for the explanation of Menzerath's law and Zipf's law of abbreviation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 194, |
|
"text": "K\u00f6hler (1984)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 374, |
|
"text": "(Vulanovic and K\u00f6hler, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 560, |
|
"text": "Mili\u010dka (2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 673, |
|
"end": 695, |
|
"text": "Gustison et al. (2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "It can actually be asked whether Menzerath's law cannot (at least in some cases) be reduced to Zipf's law of abbreviation. Consider, for instance, the word-morpheme-phoneme level. The more morphemes in a word, the higher the chance that many of them will be affixes rather than roots, have higher frequency and be on average shorter. Stave et al. (2021) , however, show that for word-morpheme-grapheme, both Zipf's and Menzerath's law are at work, and removing one of them results in a poorer fit of a model. Coming back to syntax, the following level-specific explanation can be proposed for the sentenceclause-word level. Clauses often share certain elements. Open clausal component (raising and control structures, xcomp in UD), for instance, by definition does not have an internal subject, but the main clause may contain an element that functions as an (external) subject (cf. Mary wants to buy a book, where Mary is the subject of wants, but also the (external) subject of buy). Coordinated clauses can have shared arguments (Mary is singing and dancing, where Mary is the subject of both verbs), while repeated verbs can be omitted (gapping: I like tea, and you coffee). It can be expected that the number of clauses may correlate with the number of shared elements, thus reducing the average clause length. In a similar vein, clauses can act as elements of another clause. The length of the main clause per se can decrease, if dependent clauses fulfill the roles that would otherwise have been played by non-clausal dependents (see a test of this hypothesis in Section 4.2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 353, |
|
"text": "Stave et al. (2021)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The present study is thus exploratory rather then confirmatory. It seeks to test whether Menzerath's law holds for sentence-clause-word and clause-word-grapheme across languages, and whether the crosslinguistic data lend support to any tentative explanations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanations", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "I use corpus data from Universal Dependencies (UD) 2.8.1 (Zeman et al., 2021) . All treebanks that do not have surface forms or that have less than 10,000 tokens are excluded from consideration. Naija-NSC treebank is also excluded, since it has an unusually high proportion (29%) of dep relation (which should normally be avoided).", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 77, |
|
"text": "(Zeman et al., 2021)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "If a language has more than one treebank that fit the requirements, they are all concatenated. The final dataset has 78 languages from 15 families (Indo-European, Afro-Asiatic, Mande, Basque, Mongolic, Sino-Tibetan, Uralic, Austronesian, Turkic, Mayan, Korean, Dravidian, Tai-Kadai, Austro-Asiatic, Niger-Congo). Note that how different genres are represented varies strongly across languages and treebanks. Genre is likely to affect the distribution of lengths of all units (sentences, clauses, words) and thus may potentially be a relevant factor. Nonetheless, since many treebanks do not have explicit detailed metadata about which sentence belongs to which genre, I do not attempt to control for genre. The key notions (\"sentence\", \"word\", \"clause\") are operationalized as follows. \"Sentences\" are equivalent to UD sentences. Note, however, that sentence segmentation may not be a trivial task, for instance, for oral speech, ancient languages and social media, and thus even at the sentence level some inconsistencies across treebanks are possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\"Words\" are equivalent to UD tokens with minor exceptions. Punctuation marks (PUNCT) are excluded. Symbols (SYM) and unclassifiable tokens (X) must also be excluded (these labels can be used, for instance, for very long tokens like URLs, which can skew the results). However, unlike PUNCTs, SYMs and Xs can potentially have their own dependents and thus be important elements of the syntactic structure: it is not clear whether in such cases it is legitimate to remove them, but leave the rest of the sentence. For this reason, all sentences containing at least one SYM or X are excluded completely. Empty nodes (nodes with IDs like 1.1) are excluded, since they do not exist (should not inflate clause length) and do not have their own length. For multiword tokens, the token denoted by the range ID (e.g. 1-3), i.e. the surface token, is included in the analysis, the corresponding syntactic tokens (1, 2, 3) are excluded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
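{
"text": "The token-level rules above can be summarized in a short sketch (Python; it operates on the 10-column token lines of one CoNLL-U sentence and is an illustration of the stated rules, not the code released with the paper, which is linked later in this section):\ndef count_words(token_lines):\n    # token_lines: the 10-column token lines of one CoNLL-U sentence (comment lines removed).\n    # Returns the number of words under the rules above, or None if the sentence\n    # must be discarded because it contains a SYM or X token.\n    words = 0\n    skip_until = 0  # last syntactic ID covered by a multiword (range) token\n    for line in token_lines:\n        cols = line.split('\\t')\n        tok_id, upos = cols[0], cols[3]\n        if upos in ('SYM', 'X'):\n            return None\n        if '.' in tok_id:       # empty node, e.g. 1.1\n            continue\n        if '-' in tok_id:       # multiword token, e.g. 1-3: count the surface form once\n            skip_until = int(tok_id.split('-')[1])\n            words += 1\n            continue\n        if int(tok_id) <= skip_until:\n            continue            # syntactic parts of a multiword token\n        if upos != 'PUNCT':\n            words += 1\n    return words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Materials and methods",
"sec_num": "3"
},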
|
{ |
|
"text": "\"Clauses\" are most problematic, since there is no straightforward way to demarcate clauses in UD (as in most dependency grammars). Here, a clause consists of a node which has an incoming \"clausal\" relation (clausal root) and all descendants (both direct and indirect: children, grandchildren and so on) of the clausal root that do not have an incoming \"clausal\" relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Clausal relations are root, csubj (clausal subject), ccomp (clausal complement), advcl (adverbial clause modifier), acl (relative clause modifier), parataxis, some cases of xcomp (open clausal complement), some cases of conj (coordination). Relation subtypes are not distinguished (i.e. everything after a colon is ignored: csubj:pass is treated as csubj, acl:relcl as acl). xcomp is considered a clausal relation if its child is a verb, i.e. great in You look great does not start a new clause, while work in I started to work there yesterday does. conj is considered a clausal relation if either the parent or the child is a verb. The idea is to distinguish between clausal and non-clausal coordination, but the problem is that in cases of ellipsis, the head of a clause is not necessarily a verb. The rule \"at least one of the conjuncts has to be a verb\" covers some of such cases, but not the ones like Jack a teacher, Jill a doctor, which are possible and frequent in some languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
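{
"text": "Under these definitions, clause roots can be identified with a short sketch (Python; tokens are assumed to be dicts with id, head, upos and deprel fields; this illustrates the rules above rather than reproducing the released code, and details such as whether AUX should also count as a verb are left open here):\nCLAUSAL = {'root', 'csubj', 'ccomp', 'advcl', 'acl', 'parataxis'}\n\ndef is_clause_root(token, tokens_by_id):\n    # Subtypes are stripped; xcomp starts a clause only if the dependent is a verb;\n    # conj starts a clause only if the parent or the dependent is a verb.\n    rel = token['deprel'].split(':')[0]\n    if rel in CLAUSAL:\n        return True\n    if rel == 'xcomp':\n        return token['upos'] == 'VERB'\n    if rel == 'conj':\n        head = tokens_by_id.get(token['head'])\n        return token['upos'] == 'VERB' or (head is not None and head['upos'] == 'VERB')\n    return False\n\ndef clause_count(tokens):\n    tokens_by_id = {t['id']: t for t in tokens}\n    return sum(is_clause_root(t, tokens_by_id) for t in tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Materials and methods",
"sec_num": "3"
},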
|
{ |
|
"text": "Overall, the operationalization is necessarily crude and will certainly to some extent err on both sides: make false clause splits and fail to split when it should. Apart from the coordination problem, the following issues can be mentioned. Words (e.g. participles) can have verb as the POS label, but not actually behave like verbs. The parataxis relation can be argued to not always introduce a new clause. dep, reparation and discourse may deserve a special treatment, the former two should possibly be excluded, while the latter can be argued to introduce a new clause, at least in some cases. All these questions cannot be properly addressed without a thorough language-and treebank-specific linguistically-informed manual analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Removing punctuation marks and empty tokens may in some cases lead to clauses or sentences consisting of zero words (e.g. if a clause/sentence consisted of an exclamation mark). All sentences where at least one clause has zero length are excluded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Note that the language sample is not balanced (either genetically, areally or typologically). Excluding overrepresented language groups (mostly Indo-European) would lead to an undesirable data loss, since these languages also tend to have larger treebanks, which can be assumed to yield more robust and reliable results. To this end, I avoid averaging across languages (with the exception of clause type analysis in Section 4.2). Readers are encouraged to keep in mind that certain biases can emerge from the sample properties. The code that was used to run the analyses and its detailed output are available at https:// github.com/AleksandrsBerdicevskis/menzerath.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "4 Results: Sentence-clause-word", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials and methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For every sentence in every language, I measured (according to operationalizations outlined in Section 3) how many clauses it contains and how many words the clauses in this sentence on average contain. I visually inspected the relation between the two variables for all languages. To prevent the results being skewed by outliers (usually a very small number of very long sentences), only those sentence lengths which had at least 50 datapoints were included. Note that languages vary greatly in terms of how many sentence lengths are represented in the data. After the 50-sentence filter is applied, Kiche, for instance, only has sentences which contain one or two clauses, while Icelandic covers the whole range from one to fifteen clauses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General results", |
|
"sec_num": "4.1" |
|
}, |
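{
"text": "A minimal sketch of this aggregation (Python; the pair representation and the helper name are illustrative; the released code is linked in Section 3):\nfrom collections import defaultdict\nfrom statistics import mean\nfrom scipy.stats import spearmanr\n\ndef menzerath_test(pairs, min_datapoints=50):\n    # pairs: (number of clauses, mean clause length in words) for every sentence\n    # of one language. Spearman's coefficient is computed over the per-length means,\n    # keeping only sentence lengths with at least min_datapoints observations.\n    by_length = defaultdict(list)\n    for n_clauses, mean_len in pairs:\n        by_length[n_clauses].append(mean_len)\n    lengths = sorted(n for n, values in by_length.items() if len(values) >= min_datapoints)\n    means = [mean(by_length[n]) for n in lengths]\n    return spearmanr(lengths, means)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General results",
"sec_num": "4.1"
},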
|
{ |
|
"text": "Most typical patterns are represented by examples in Figure 1 . In the vast majority of languages, the average clause length decreases monotonically according to what seems to be a power law (see, for instance, Wolof in Figure 1a ). Sometimes, minor deviations from monotonicity are observed, often at large sentence length values (see Hebrew in Figure 1b) . Nonetheless, even with the deviations most languages still exhibit a clear general downward trend. Those few for which the downward trend is not observed include, for instance, Latin (Figure 1c ) and Scottish Gaelic (Figure 1d ). In Latin, there is a decrease, but only in the beginning, while in Scottish Gaelic, there is rather a very small upward trend.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 220, |
|
"end": 229, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 356, |
|
"text": "Figure 1b)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 552, |
|
"text": "(Figure 1c", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 585, |
|
"text": "(Figure 1d", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "General results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To do a formal test, I calculated Spearman's correlation coefficients between sentence length and clause length for all languages. They are reported in Table 3 in Appendix A (together with corresponding p-values) and summarized in Table 1 . When summarizing the results, p-values, however, should be treated with caution, since they strongly depend on sample size, that is, how many different sentence lengths are represented in the data. For languages with small range of lengths p-values will never be small, even if perfect correlation is observed. Komi-Zyrian, for instance, demonstrates a perfect negative correlation, but only four different sentence lengths are represented (1-4 clauses), and the p-value is a theoretical minimum of 0.083. For this reason, the following criteria were applied. If the absolute value of the correlation coefficient was equal to or larger than 0.70, the language was labelled as demonstrating as either negative or positive correlation, regardless of the p-value. The same was done if the absolute value of the coefficient was larger than or equal to 0.30 and smaller than 0.70 and the p-value was smaller than or equal to 0.05. In other cases the correlation was assumed to be absent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 159, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 238, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "General results", |
|
"sec_num": "4.1" |
|
}, |
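{
"text": "These criteria amount to the following decision rule (a minimal restatement in Python; the function name is illustrative):\ndef label_correlation(r, p):\n    # |r| >= 0.70 counts as a correlation regardless of p;\n    # 0.30 <= |r| < 0.70 counts only if p <= 0.05; otherwise no correlation is assumed.\n    if abs(r) >= 0.70 or (abs(r) >= 0.30 and p <= 0.05):\n        return 'negative' if r < 0 else 'positive'\n    return 'none'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General results",
"sec_num": "4.1"
},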
|
{ |
|
"text": "Under this interpretation, there are no cases of (anti-Menzerathian) positive correlation and 10 cases where the correlation could not be detected. It is difficult to tell whether it happens because it is truly absent or whether the sample is too small, and thus not clear whether the datapoints should be interpreted as 'Menzerath's law does not hold' or 'Unknown whether Menzerath's law holds'. If we concentrate on languages with ranges 1-6 or larger (on the assumption that they yield more reliable samples), then seven languages out of 52 do not clearly conform to Menzerath's law: Latin, Scottish Gaelic, Icelandic, Old East Slavic, Old French, Finnish and Turkish. The other three (\"small\") languages that do not have a correlation are Manx, Breton and Sanskrit.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As a preliminary test of the hypothesis that Menzerath's law at the sentence-clause-word level can be explained by the fact that certain words are syntactically shared between clauses, I performed the following analyses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "First, I compared average lengths of various types of clauses. For every language, I extract all sentences that have exactly two clauses, one main (matrix) clause, one dependent (though in cases of coordination, the main-dependent contraposition is actually somewhat artificial). For dependent clauses, \"type\" is equivalent to the incoming relation of the clause root (that is, ccomp, conj etc.). For every clause type, its average length within language is calculated (types with less than 50 datapoints within a language were excluded). Note that if sentences which contain more than two clauses were included, the comparison would have to become much more nuanced. Main clauses could have different number of dependent clauses, while dependent clauses could have double roles: act as main clauses for their own dependents. These factors can potentially affect length distribution, and would have to be taken into account. For simplicity, the analysis is limited to two-clause sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Clause lengths vary greatly across languages and treebanks. To correct for that and focus on the comparison across clause types within language, I normalized the average length of every type by average length of a simple sentence within the same language (that is, a sentence consisting of one and only one clause). These normalized lengths are then averaged across languages. The results are presented in Table 2 . Keep in mind that such averaging may yield heavily skewed results, since the language sample is not balanced (and interquartile ranges suggest large variation for all types). Note also that not every clause type is represented in every language.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 413, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
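{
"text": "A minimal sketch of this normalization (Python; the dictionary shapes are illustrative, not the released code):\nfrom statistics import mean\n\ndef normalized_type_lengths(type_lengths, simple_sentence_lengths):\n    # type_lengths: clause lengths in words per clause type for one language,\n    # e.g. {'ccomp': [7, 9], 'advcl': [5, 6]}; rare types are assumed to be filtered out.\n    # Lengths are normalized by the mean length of a simple (one-clause) sentence.\n    baseline = mean(simple_sentence_lengths)\n    return {t: mean(lengths) / baseline for t, lengths in type_lengths.items()}\n\ndef cross_language_means(per_language):\n    # per_language: one dict per language, as returned above; types missing in a\n    # language are simply skipped when averaging across languages.\n    types = {t for lang in per_language for t in lang}\n    return {t: mean(lang[t] for lang in per_language if t in lang) for t in types}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Is sharing elements across clauses the answer?",
"sec_num": "4.2"
},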
|
{ |
|
"text": "xcomp, as expected, tends to be short. The same is true for parataxis, probably because this relation is often used for short interjected clauses like parenthetical constructions (for example, for example or of course), tag questions etc. Interestingly, the longest type is not main, but ccomp.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As mentioned in Section 2.3, dependent clauses may perform functions of non-clausal dependents. csubj functions as a subject and is a clausal equivalent of nsubj, advcl is a clausal equivalent of advmod, ccomp and xcomp can be said to be clausal equivalents of obj, though note that this last correspondence is less clear. Consider now a main clause which has one of these clausal dependents, for instance, csubj. According to the operationalization used in this paper, the words contained by the dependent clause are not included into the main clause. In other words, there is a subject, but it is \"outside\" of a clause. If, however, the dependent was non-clausal (nsubj), it (and all its dependents) would have been inside the main clause and contributed to its length. It is no surprise then that main clauses are shorter than simple sentences. It is, however, interesting whether this is the only reason. To test that, I measure the decrease in length caused by having a clausal dependent (e.g. csubj) is approximately equal to the average length of a corresponding non-clausal dependent (nsubj).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "I label every main clause in the two-clause-sentence sample described above by the type of dependent clause it has (xcomp and ccomp are merged together and labelled comp). The mean length of every \"main-clause type\" (normalized by the length of the simple sentence) across languages is reported in Table 2 in the column \"main length\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 305, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Using the simple-sentence sample, I calculated the mean length (in words) of nsubj, advmod and obj. The column \"diff\" in Table 2 shows the normalized difference between the simple sentence length and the sum of two lengths: that of main-clause type (e.g. csubj) and the corresponding non-clausal dependent (nsubj). As all other numbers in the table, the difference is normalized by the simplesentence length. If the hypothesis is correct, the difference should be close to zero, and indeed it is for comp and advcl (though note large interquartile ranges), but not for csubj. Table 2 : Mean lengths across languages. The \"main length\" column should be read as 'mean length of a main clause having a dependent clause of the specified type'. \"Diff\" is a difference between the simple sentence length and the sum of two lengths: that of main-clause type (\"main length\") and the corresponding non-clausal dependent (e.g. textttnsubj for textttcsubj). All numbers are normalized by the mean length of a simple sentence in the same language. IQR = interquartile range.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 128, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 583, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "No other clear patterns are observed. There does not seem to be any strong correlation between the length of the dependent clause of a certain type and corresponding main clause type. Interestingly, main clauses that have an acl clause are slightly longer than simple sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is sharing elements across clauses the answer?", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Exactly as with the sentence-clause-word analysis, I measured for every clause in every language how many words it contains and how many graphemes the words in this clause on average contain. I visually inspected the relation between the two variables for all languages. Again, only those lengths that had at least 50 datapoints were included.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results: Clause-word-grapheme", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Most typical patterns are represented by examples in Figure 2 . Overall, the results were more variable than for the sentence-clause-word analysis, where one dominant pattern was observed. For clauseword-grapheme, 29 languages also exhibit a downward trend. Most often, it is L-shaped: a very steep decrease in the beginning, followed by a nearly flat line (see, for instance, Bambara in Figure 2a) . In a few cases, the decrease is more gradually spread over the curve (Indonesian in Figure 2b ). For 42 languages, an U-curve is observed, first a decrease and then a comparable increase (Latvian in Figure 2c ). For four languages, the differences are so small that the pattern is best described as a flat line (see, for instance, Uyghur in Figure 2d ). For four languages, there is an upward trend (Kazakh in Figure 2e ). Finally, Persian (Figure 2f ) exhibits a unique pattern: an inverted U-curve, an increase followed by a decrease.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 398, |
|
"text": "Figure 2a)", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 494, |
|
"text": "Figure 2b", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 609, |
|
"text": "Figure 2c", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 751, |
|
"text": "Figure 2d", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 811, |
|
"end": 820, |
|
"text": "Figure 2e", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 851, |
|
"text": "(Figure 2f", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results: Clause-word-grapheme", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Spearman's correlation coefficients were calculated in the same way as for the sentence-clause-word level and summarized in Table 1 . As can be seen from the summary, the adherence to Menzerath's law at the clause-word-grapheme level is much weaker. Note, however, that correlation coefficients are not really informative for the languages with clear non-monotonic patterns. Since I can propose no explicit hypothesis to explain the observed data, an inferential test is not appropriate: it is unclear what it can infer. Technically, some kind of non-linear regression model could of course be fitted to the data, but in the absence of a specific theory to test, the model would end up having many researcher degrees of freedom (Tong, 2019; Simmons et al., 2011) , which is undesirable. I limit myself to labelling the observed curves as DOWN, UP, DOWN-UP, FLAT or UP-DOWN. The formalized procedure to determine the shape of the curve is described in Appendix B. The results are summarized above and reported in detail in Table 3 in Appendix A.", |
|
"cite_spans": [ |
|
{ |
|
"start": 728, |
|
"end": 740, |
|
"text": "(Tong, 2019;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 762, |
|
"text": "Simmons et al., 2011)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 131, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1022, |
|
"end": 1029, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results: Clause-word-grapheme", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In order to test whether the results are robust, I reran the analyses with various thresholds instead of 50 datapoints per sentence/clause length (0, 20, 100). There were no qualitative changes of the overall Since clause can be argued to be a problematic and / or imperfectly operationalized construct, I ran an analysis for the sentence-word-grapheme level (ignoring the clause level). The results resemble the ones for clause-word-grapheme (see Table 1 ). Ma\u010dutek et al. (2017) reported that Menzerath's law holds for clause-phrase-word in Czech (see Section 2.2). It can be questioned whether their operationalization of phrase is theoretically adequate (in general, phrase is a less theory-neutral notion than clause), but I used it to run the analyses for clausephrase-word and sentence-clause-phrase. I reproduced their findings for Czech, but overall, compared to sentence-clause-word, the adherence to Menzerath's law was slightly lower for clause-phrase-word, much lower for sentence-clause-phrase and even lower for phrase-word-grapheme (see Table 1 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 480, |
|
"text": "Ma\u010dutek et al. (2017)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 448, |
|
"end": 455, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1053, |
|
"end": 1060, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Robustness analyses", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "At the sentence-clause-word level, Menzerath's law largely holds, regardless of corpus size and typological or genealogical properties of language. It is not clear what is special about ten (or seven, depending on how one counts) languages that do not demonstrate an expected correlation. It can be noticed that four (or three) of them are ancient languages: Latin, Old East Slavic, Old French (and Sanskrit), but there are other ancient languages (e.g. Old Church Slavonic or Classical Chinese) that conform to Menzerath's law.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Clink and Lau (2020), analyzing primate communication, reach a somewhat similar conclusion: Menzerath's law holds in some cases, but not always (though in their study the adherence rate is much lower). They hypothesize that while the pressure for efficiency may facilitate compliance with Menzerath's law, other pressures may affect communication, sometimes to the extent that the law no longer holds. It is not, however, clear, which pressures could affect, for instance, non-Menzerathian Finnish and Icelandic, but not Menzerathian Estonian and Norwegian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "One potential confound is register, or genre (Hou et al., 2017) . It is a question for future research to what extent Menzerath's law is robust to genre (and if it is not, why).", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 63, |
|
"text": "(Hou et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The explanation of why Menzerath's law (largely) holds is still wanting. The shared-element account that I propose seems to explain some cases, but not all, and it is not clear whether it is the sole reason. This hypothesis can potentially be further explored by using enhanced dependencies available in some UD treebanks (e.g. by measuring whether the shorter length of coordinated clauses can be \"compensated\" by taking into account shared dependents and elided verbs, or whether xcomp is shorter solely because it does not have an internal subject). Overall, it may be useful to consider whether Menzerath's law should be explained by level-specific factors, general optimization principles, or both.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "At the clause-word-grapheme level, Menzerath's law generally does not hold. The observation by Altmann (1980) that the relationship is probably not monotonic turns out to be at least partly true. In the vast majority of cases, the mean word length as a function of number of words follows one of the two patterns: either L-shaped (steep decrease and then an almost flat line) or U-shaped (decrease and increase). L-shaped cases can be said to adhere to Menzerath's law, but first, they are less frequent than U-shaped ones, second, not all of them demonstrate a strong negative correlation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 109, |
|
"text": "Altmann (1980)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Again, there does not seem to be any obvious way to explain the observed variance by different properties of languages or treebanks. Writing system may potentially be a confound. Apart from alphabets, the writing systems represented in the sample include (impure) abjads (vowels are omitted or partly omitted; e.g. Arabic), abugidas (consonant-vowel units are based on a consonant letter; vowel notation is secondary; e.g. Hindi) and logographic scripts (e.g. Mandarin). Japanese is a special case, using a mixture of a syllabary (kana) and a logographic script (kanji). However, if the writing system plays a role, its contribution is inconsistent: (Mandarin) Chinese and Classical Chinese do not conform to Menzerath's law, while Cantonese does; Hindi (Devanagari, abugida) does not, while Amharic (Ge'ez script, abugida) does.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It can be argued that graphemic word length is not the most adequate measure, and that phonemic length should be preferred. These measures, however, tend to be strongly correlated (Piantadosi et al., 2011) . Moreover, should Menzerath's law hold for phonemes, it would probably mean that there is some kind of optimization pressure due to which it emerges in oral speech. But then it is very likely that the same pressure would also affect written language (and most of the analyzed corpora contain predominantly written texts) and the law should hold for graphemes, too.", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 205, |
|
"text": "(Piantadosi et al., 2011)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "An anonymous reviewer raises two more important concerns. First, it can be questioned whether Menzerath's law should actually hold for corpora and not texts (cf. a discussion about inter-and intratextual laws by Grzybek and Stadlober (2011) ). Given that the law is formulated as a relationship between the length of a unit and a sub-unit, and that it is hypothesized to emerge due to some kind of optimization pressure, I do not see any reason to assume that it should be valid only for texts and not for any sample of units, provided that the sample is large and representative. For any corpus, it can of course be questioned whether it is large and representative enough, but usually corpora tend to do better on these two scales than single texts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 240, |
|
"text": "Grzybek and Stadlober (2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The second concern is that Menzerath's law may not be valid if the unit, the sub-unit and the subsub-unit are not at the adjacent levels of the hierarchy. It can be argued that by testing the law on clause-word-grapheme, I am hopping over a level, since grapheme is not an immediate constituent of a word, and instead syllables or morphemes should be used. It is, however, unclear, which is the more appropriate unit, syllable or morpheme (or whether the law should work equally well for both). Furthermore, it is likely that graphemic (and phonemic) length is highly correlated with both syllabic and morphemic length. (To give an example: I measured the Spearman's correlation coefficient between the graphemic and the morphemic length of Swedish words, using the CoDeRooMor dataset (Volodina et al., 2021) : r = 0.83, p < 0.001.) Note also that the robustness analyses described in Section 6 suggest that while adding or removing hierarchical levels (e.g. removing clause or adding phrase) affects the results, it does not change the overall picture. Nonetheless, this is a reasonable concern, and it would of course be beneficial to reproduce this study with syllable or morpheme as a sub-sub-unit. The problem is that the necessary resources are lacking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 785, |
|
"end": 808, |
|
"text": "(Volodina et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Unlike Stave et al., I do not test for the role of Zipf's abbreviation law. For sentence-clause-word, this hardly is possible, since clauses are not repeated in languages often enough to enable frequency estimates. For clause-word-grapheme, I cannot propose an explicit prediction for the role of clause length that could have been tested by a regression model (see Section 5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 22, |
|
"text": "Stave et al., I", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "To conclude, Menzerath's law does not seem to be universal. It does not hold at some levels of analysis, and even at those where it does, some languages (or at least corpora) are exceptions. The reasons for that (both compliance and non-compliance) are not fully clear. Further studies should focus on explanatory approaches and on reproducing the existing results on larger and better samples. 1 Table 3 : Results across languages (OCS = Old Church Slavonic, OES = Old East Slavic). For sentenceclause-word analysis: r = Spearman's correlation coefficient, p = corresponding p-value, range = maximum sentence length (in clauses) for which 50 datapoints are available. For clause-word-grapheme analysis: trend = the shape of the curve, min = clause length for which the shortest mean word length is observed, range = maximum clause length (in words) for which 50 datapoints are available. For languages with DOWN, UP or FLAT trend, asterisk marks those where |r| \u2265 0.70 or |r| \u2265 0.30 and p \u2264 0.05. Corpus size is given in K words, families are denoted by ISO-639 codes. There are no ISO-639 codes for Koreanic (the code for Korean is used) and Tai-Kadai (the code for the Tai branch is used).", |
|
"cite_spans": [ |
|
{ |
|
"start": 395, |
|
"end": 396, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 404, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The research presented here has been enabled by the Swedish national research infrastructure Nationella spr\u00e5kbanken, funded jointly by the Swedish Research Council (2018-2024, contract 2017-00626) and the 10 participating partner institutions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Appendix B. The procedure for determining the shape of the curveThe formal procedure of determining the shape of the curve (\"trend\") for the clause-word-grapheme (reported in Table 3 in Appendix A) was as follows. The extrema (maximum and minimum) of the curve were identified. Then four points (first point, the smallest clause length; maximum; minimum; last point, the largest clause length) were compared by means of t-tests between adjacent pairs of points. In many cases, there were actually only three or even two points, because either maximum or minimum (or both) coincided with either first or last point (or both). Thus, the number of t-tests varied from one to three (Bonferroni correction for multiple comparisons was applied). If p-value was smaller than 0.05 and the absolute value of Cohen's d (effect size) was larger than 0.20, then the difference was considered to be large enough to label the corresponding part of the curve as going either DOWN or UP, otherwise it was ignored. If there were no differences at all, the whole curve was labelled as FLAT.Bear in mind that the procedure is descriptive rather than inferential (even though it uses inferential statistics as a technique). It is approximately equivalent to manually classifying the patterns, but relies on formalized criteria and thus is more reproducible. See main text for the reasons why more sophisticated inferential tests were not applied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 182, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Prolegomena to Menzerath's law. Glottometrika", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Altmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Altmann. 1980. Prolegomena to Menzerath's law. Glottometrika, 2:1-10.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Arens'\u00abVerborgene Ordnung\u00bb und das Menzerathsche Gesetz", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Altmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Allgemeine Sprachwissenschaft, Sprachtypologie und Textlinguistik", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Altmann. 1983. H. Arens'\u00abVerborgene Ordnung\u00bb und das Menzerathsche Gesetz. In Manfred Faust, Roland Harweg, Werner Lehfeldt, and G\u00f6tz Wienold, editors, Allgemeine Sprachwissenschaft, Sprachtypologie und Textlinguistik, pages 31-39. Gunter Narr, T\u00fcbingen.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The parameters of the Menzerath-Altmann law in genomes", |
|
"authors": [ |
|
{ |
|
"first": "Jaume", |
|
"middle": [], |
|
"last": "Baixeries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoni", |
|
"middle": [], |
|
"last": "Hern\u00e1ndez-Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N\u00faria", |
|
"middle": [], |
|
"last": "Forns", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Ferrer-I-Cancho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "20", |
|
"issue": "2", |
|
"pages": "94--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaume Baixeries, Antoni Hern\u00e1ndez-Fern\u00e1ndez, N\u00faria Forns, and Ramon Ferrer-i-Cancho. 2013. The parameters of the Menzerath-Altmann law in genomes. Journal of Quantitative Linguistics, 20(2):94-104.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Zipf's law of abbreviation as a language universal", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Bentz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Ferrer-I-Cancho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Leiden workshop on capturing phylogenetic algorithms for linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Bentz and Ramon Ferrer-i-Cancho. 2016. Zipf's law of abbreviation as a language universal. In Proceedings of the Leiden workshop on capturing phylogenetic algorithms for linguistics, pages 1-4. University of T\u00fcbingen.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Menzerath-Altmann law for syntactic structures in Ukrainian", |
|
"authors": [ |
|
{ |
|
"first": "Solomija", |
|
"middle": [], |
|
"last": "Buk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrij", |
|
"middle": [], |
|
"last": "Rovenchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Glottotheory", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "10--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Solomija Buk and Andrij Rovenchak. 2008. Menzerath-Altmann law for syntactic structures in Ukrainian. Glot- totheory, 1(1):10-17.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A quantitative probe into the hierarchical structure of written Chinese", |
|
"authors": [ |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the First Workshop on Quantitative Syntax (Quasy, SyntaxFest 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heng Chen and Haitao Liu. 2019. A quantitative probe into the hierarchical structure of written Chinese. In Proceedings of the First Workshop on Quantitative Syntax (Quasy, SyntaxFest 2019), pages 25-32, Paris, France, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Adherence to Menzerath's law is the exception (not the rule) in three duetting primate species", |
|
"authors": [ |
|
{ |
|
"first": "Dena", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Clink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allison", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Royal Society Open Science", |
|
"volume": "7", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dena J. Clink and Allison R. Lau. 2020. Adherence to Menzerath's law is the exception (not the rule) in three duetting primate species. Royal Society Open Science, 7(11):201557.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The parameters of the Altmann-Menzerath law", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Cramer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "41--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irene Cramer. 2005. The parameters of the Altmann-Menzerath law. Journal of Quantitative Linguistics, 12(1):41-52.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "When is Menzerath-Altmann law mathematically trivial? A new approach", |
|
"authors": [ |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Ferrer-I-Cancho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoni", |
|
"middle": [], |
|
"last": "Hern\u00e1ndez-Fern\u00e1ndez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaume", |
|
"middle": [], |
|
"last": "Baixeries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "D\u0119bowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1n", |
|
"middle": [], |
|
"last": "Ma\u010dutek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Statistical Applications in Genetics and Molecular Biology", |
|
"volume": "13", |
|
"issue": "6", |
|
"pages": "633--644", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramon Ferrer-i-Cancho, Antoni Hern\u00e1ndez-Fern\u00e1ndez, Jaume Baixeries, \u0141ukasz D\u0119bowski, and J\u00e1n Ma\u010dutek. 2014. When is Menzerath-Altmann law mathematically trivial? A new approach. Statistical Applications in Genetics and Molecular Biology, 13(6):633-644.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Do we have problems with Arens' law? A new look at the sentenceword relation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Grzybek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ernst", |
|
"middle": [], |
|
"last": "Stadlober", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Exact Methods in the Study of Language and Text: Dedicated to Gabriel Altmann on the Occasion of his 75th Birthday", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "203--215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Grzybek and Ernst Stadlober. 2011. Do we have problems with Arens' law? A new look at the sentence- word relation. In Peter Grzybek and Reinhard K\u00f6hler, editors, Exact Methods in the Study of Language and Text: Dedicated to Gabriel Altmann on the Occasion of his 75th Birthday, pages 203-215. De Gruyter Mouton.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Gelada vocal sequences follow Menzerath's linguistic law", |
|
"authors": [ |
|
{ |
|
"first": "Morgan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Gustison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Semple", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramon", |
|
"middle": [], |
|
"last": "Ferrer-I-Cancho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thore", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bergman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "113", |
|
"issue": "19", |
|
"pages": "2750--2758", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morgan L. Gustison, Stuart Semple, Ramon Ferrer-i-Cancho, and Thore J. Bergman. 2016. Gelada vocal se- quences follow Menzerath's linguistic law. Proceedings of the National Academy of Sciences, 113(19):E2750- E2758.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A study on correlation between Chinese sentence and constituting clauses based on the Menzerath-Altmann law", |
|
"authors": [ |
|
{ |
|
"first": "Renkui", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hue", |
|
"middle": [ |
|
"San" |
|
], |
|
"last": "Do", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongchao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "350--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Renkui Hou, Chu-Ren Huang, Hue San Do, and Hongchao Liu. 2017. A study on correlation between Chinese sentence and constituting clauses based on the Menzerath-Altmann law. Journal of Quantitative Linguistics, 24(4):350-366.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Parameter interpretation of the Menzerath law: evidence from Serbian", |
|
"authors": [ |
|
{ |
|
"first": "Emmerich", |
|
"middle": [], |
|
"last": "Kelih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Text and Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmerich Kelih. 2010. Parameter interpretation of the Menzerath law: evidence from Serbian. In Peter Grzybek, Emmerich Kelih, and J\u00e1n Ma\u010dutek, editors, Text and Language, pages 71-80, Wien. Presens Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Zur Interpretation des Menzerathschen Gesetzes", |
|
"authors": [ |
|
{ |
|
"first": "Reinhard", |
|
"middle": [], |
|
"last": "K\u00f6hler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reinhard K\u00f6hler. 1984. Zur Interpretation des Menzerathschen Gesetzes [On the interpretation of the Menzerath's law].", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Menzerath-Altmann law in syntactic dependency structure", |
|
"authors": [ |
|
{ |
|
"first": "J\u00e1n", |
|
"middle": [], |
|
"last": "Ma\u010dutek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radek\u010dech", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Mili\u010dka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fourth International Conference on Dependency Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "100--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00e1n Ma\u010dutek, Radek\u010cech, and Ji\u0159\u00ed Mili\u010dka. 2017. Menzerath-Altmann law in syntactic dependency structure. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017), pages 100-107, Pisa, Italy, September. Link\u00f6ping University Electronic Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Menzerath-Altmann law and prothetic /v/ in spoken Czech", |
|
"authors": [ |
|
{ |
|
"first": "J\u00e1n", |
|
"middle": [], |
|
"last": "Ma\u010dutek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Chrom\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michaela", |
|
"middle": [], |
|
"last": "Ko\u0161\u010dov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "26", |
|
"issue": "1", |
|
"pages": "66--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00e1n Ma\u010dutek, Jan Chrom\u00fd, and Michaela Ko\u0161\u010dov\u00e1. 2019. Menzerath-Altmann law and prothetic /v/ in spoken Czech. Journal of Quantitative Linguistics, 26(1):66-80.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Menzerath's law: The whole is greater than the sum of its parts", |
|
"authors": [ |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "Mili\u010dka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "85--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ji\u0159\u00ed Mili\u010dka. 2014. Menzerath's law: The whole is greater than the sum of its parts. Journal of Quantitative Linguistics, 21(2):85-99.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Word lengths are optimized for efficient communication", |
|
"authors": [ |
|
{ |

"first": "Steven", |

"middle": [ |

"T" |

], |

"last": "Piantadosi", |

"suffix": "" |

}, |

{ |

"first": "Harry", |

"middle": [], |

"last": "Tily", |

"suffix": "" |

}, |

{ |

"first": "Edward", |

"middle": [], |

"last": "Gibson", |

"suffix": "" |

} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "108", |
|
"issue": "9", |
|
"pages": "3526--3529", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communica- tion. Proceedings of the National Academy of Sciences, 108(9):3526-3529.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Zipf's word frequency law in natural language: A critical review and future directions", |
|
"authors": [ |
|
{ |

"first": "Steven", |

"middle": [ |

"T" |

], |

"last": "Piantadosi", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "Psychonomic bulletin & review", |
|
"volume": "21", |
|
"issue": "5", |
|
"pages": "1112--1130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven T Piantadosi. 2014. Zipf's word frequency law in natural language: A critical review and future directions. Psychonomic bulletin & review, 21(5):1112-1130.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The Menzerath-Altmann law in translated texts as compared to the original texts", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Roukk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Exact Methods in the Study of Language and Text: Dedicated to Gabriel Altmann on the Occasion of his 75th Birthday", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "605--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Roukk. 2011. The Menzerath-Altmann law in translated texts as compared to the original texts. In Peter Grzybek and Reinhard K\u00f6hler, editors, Exact Methods in the Study of Language and Text: Dedicated to Gabriel Altmann on the Occasion of his 75th Birthday, pages 605-610. De Gruyter Mouton.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Simmons", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leif", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Nelson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uri", |
|
"middle": [], |
|
"last": "Simonsohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Psychological Science", |
|
"volume": "22", |
|
"issue": "11", |
|
"pages": "1359--1366", |
|
"other_ids": { |
|
"PMID": [ |
|
"22006061" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn. 2011. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11):1359- 1366. PMID: 22006061.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Optimization of morpheme length: a cross-linguistic assessment of Zipf's and Menzerath's laws", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludger", |
|
"middle": [], |
|
"last": "Paschen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Pellegrino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Seifart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Linguistics Vanguard", |
|
"volume": "7", |
|
"issue": "s3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Stave, Ludger Paschen, Fran\u00e7ois Pellegrino, and Frank Seifart. 2021. Optimization of morpheme length: a cross-linguistic assessment of Zipf's and Menzerath's laws. Linguistics Vanguard, 7(s3):20190076.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Clause length and Menzerath's law. Glottometrika", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Teupenhayn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Altmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "127--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Teupenhayn and Gabriel Altmann. 1984. Clause length and Menzerath's law. Glottometrika, 6:127-138.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Statistical inference enables bad science; statistical thinking enables good science", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Tong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "The American Statistician", |
|
"volume": "73", |
|
"issue": "sup1", |
|
"pages": "246--261", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Tong. 2019. Statistical inference enables bad science; statistical thinking enables good science. The American Statistician, 73(sup1):246-261.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "CoDeRooMor: A new dataset for non-inflectional morphology studies of Swedish", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Volodina, Yousuf Ali Mohammed, and Therese Lindstr\u00f6m Tiedemann. 2021. CoDeRooMor: A new dataset for non-inflectional morphology studies of Swedish. In Proceedings of the 23rd Nordic Conference on Com- putational Linguistics (NoDaLiDa), pages 178-189, Reykjavik, Iceland (Online), May 31-2 June. Link\u00f6ping University Electronic Press, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Syntactic units and structures", |
|
"authors": [ |
|
{ |
|
"first": "Relja", |
|
"middle": [], |
|
"last": "Vulanovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reinhard", |
|
"middle": [], |
|
"last": "K\u00f6hler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Quantitative linguistics: An international handbook", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "274--291", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Relja Vulanovic and Reinhard K\u00f6hler. 2005. Syntactic units and structures. In Reinhard K\u00f6hler, Gabriel Altmann, and Rajmund Piotrowski, editors, Quantitative linguistics: An international handbook, pages 274-291. Walter de Gruyter, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Is the Menzerath-Altmann law specific to certain languages in certain registers?", |
|
"authors": [ |
|
{ |
|
"first": "Lirong", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lianzhen", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "27", |
|
"issue": "3", |
|
"pages": "187--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lirong Xu and Lianzhen He. 2020. Is the Menzerath-Altmann law specific to certain languages in certain regis- ters? Journal of Quantitative Linguistics, 27(3):187-203.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Universal Dependencies 2.8.1. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Faculty of Mathematics and Physics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Zeman, Joakim Nivre, et al. 2021. Universal Dependencies 2.8.1. LINDAT/CLARIAH-CZ digital li- brary at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Examples of correlation between sentence length (in clauses) and mean clause length (in words). Error bars show interquartile range. Sentence lengths with fewer than 50 datapoints were excluded", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "(a) Bambara. Downward trend (L-shape) (b) Indonesian. Downward trend (c) Latvian. Down-and-up trend (d) Uyghur. Flat line (e) Kazakh. Upward trend (f) Persian. Up-and-down trend", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Examples of correlation between clause length (in words) and average word length (in graphemes). Error bars show interquartile range. Clause lengths with fewer than 50 datapoints were excluded picture.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"text": "Summary of the correlation tests across languages at different levels. The total amount of languages may vary across levels, since languages which do not have enough datapoints are excluded from the analysis Section 4.2 describes an additional analysis that seeks to explore whether Menzerath's at the sentenceclause-word level can be explained by the fact that clauses share elements. Section 6 describes additional robustness analyses.", |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">negative positive none</td></tr><tr><td>sentence-clause-word</td><td>68</td><td>0</td><td>10</td></tr><tr><td>clause-word-grapheme</td><td>12</td><td>29</td><td>37</td></tr><tr><td>sentence-word-grapheme</td><td>26</td><td>19</td><td>30</td></tr><tr><td>sentence-clause-phrase</td><td>38</td><td>5</td><td>35</td></tr><tr><td>clause-phrase-word</td><td>58</td><td>2</td><td>18</td></tr><tr><td>phrase-word-grapheme</td><td>11</td><td>22</td><td>45</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td colspan=\"4\">continued from previous page</td><td/></tr><tr><td/><td colspan=\"3\">sentence-clause-word</td><td colspan=\"3\">clause-word-grapheme</td><td colspan=\"2\">general info</td></tr><tr><td>language</td><td>r</td><td>p</td><td colspan=\"2\">range trend</td><td>min</td><td colspan=\"3\">range corpus size family</td></tr><tr><td>Lithuanian</td><td colspan=\"3\">-1.00 <0.001 7</td><td colspan=\"2\">down-up 5</td><td>14</td><td>75</td><td>ine</td></tr><tr><td>Maltese</td><td colspan=\"3\">-1.00 <0.001 8</td><td>down</td><td>6</td><td>15</td><td>44</td><td>afa</td></tr><tr><td>Manx</td><td>0.50</td><td>1.000</td><td>3</td><td colspan=\"2\">down-up 5</td><td>10</td><td>21</td><td>ine</td></tr><tr><td>North Sami</td><td colspan=\"2\">-0.80 0.333</td><td>4</td><td colspan=\"2\">down-up 4</td><td>10</td><td>27</td><td>urj</td></tr><tr><td>Norwegian</td><td colspan=\"2\">-0.93 0.002</td><td>8</td><td colspan=\"2\">down-up 3</td><td>26</td><td>667</td><td>ine</td></tr><tr><td>OCS</td><td colspan=\"2\">-0.82 0.034</td><td>7</td><td>down*</td><td>10</td><td>13</td><td>58</td><td>ine</td></tr><tr><td>OES</td><td colspan=\"2\">-0.18 0.713</td><td>7</td><td colspan=\"2\">down-up 6</td><td>20</td><td>180</td><td>ine</td></tr><tr><td>Old French</td><td colspan=\"2\">-0.29 0.556</td><td>7</td><td colspan=\"2\">down-up 8</td><td>17</td><td>171</td><td>ine</td></tr><tr><td>Persian</td><td colspan=\"2\">-0.88 0.003</td><td>9</td><td colspan=\"2\">up-down 1</td><td>29</td><td>655</td><td>ine</td></tr><tr><td>Polish</td><td colspan=\"3\">-1.00 <0.001 9</td><td colspan=\"2\">down-up 4</td><td>22</td><td>499</td><td>ine</td></tr><tr><td>Portuguese</td><td colspan=\"2\">-0.93 0.001</td><td>9</td><td>down</td><td>21</td><td>34</td><td>571</td><td>ine</td></tr><tr><td>Romanian</td><td colspan=\"2\">-0.86 0.001</td><td>11</td><td colspan=\"2\">down-up 5</td><td>32</td><td>938</td><td>ine</td></tr><tr><td>Russian</td><td colspan=\"2\">-0.87 0.001</td><td>11</td><td colspan=\"2\">down-up 5</td><td>28</td><td>1421</td><td>ine</td></tr><tr><td>Sanskrit</td><td colspan=\"2\">-0.70 0.233</td><td>5</td><td>down</td><td>8</td><td>10</td><td>29</td><td>ine</td></tr><tr><td>Scottish Gaelic</td><td colspan=\"2\">-0.03 1.000</td><td>6</td><td colspan=\"2\">down-up 3</td><td>19</td><td>72</td><td>ine</td></tr><tr><td>Serbian</td><td colspan=\"3\">-1.00 <0.001 7</td><td colspan=\"2\">down-up 3</td><td>20</td><td>98</td><td>ine</td></tr><tr><td>Slovak</td><td colspan=\"2\">-0.90 0.083</td><td>5</td><td colspan=\"2\">down-up 4</td><td>16</td><td>106</td><td>ine</td></tr><tr><td>Slovenian</td><td colspan=\"3\">-1.00 <0.001 7</td><td colspan=\"2\">down-up 3</td><td>20</td><td>170</td><td>ine</td></tr><tr><td>Spanish</td><td colspan=\"3\">-0.99 <0.001 11</td><td>down</td><td>22</td><td>37</td><td>1015</td><td>ine</td></tr><tr><td>Swedish</td><td colspan=\"2\">-1.00 0.083</td><td>4</td><td colspan=\"2\">down-up 4</td><td>14</td><td>207</td><td>ine</td></tr><tr><td>Tamil</td><td colspan=\"2\">-1.00 0.083</td><td>4</td><td>down</td><td>5</td><td>10</td><td>12</td><td>dra</td></tr><tr><td>Thai</td><td colspan=\"3\">-1.00 <0.001 7</td><td colspan=\"2\">down-up 4</td><td>14</td><td>22</td><td>(tai)</td></tr><tr><td>Turkish</td><td colspan=\"2\">-0.45 0.267</td><td>8</td><td>down</td><td>4</td><td>20</td><td>592</td><td>trk</td></tr><tr><td>Turkish German</td><td colspan=\"2\">-1.00 0.003</td><td>6</td><td>down*</td><td>14</td><td>14</td><td>37</td><td>ine</td></tr><tr><td>Ukrainian</td><td 
colspan=\"3\">-1.00 <0.001 7</td><td colspan=\"2\">down-up 3</td><td>19</td><td>122</td><td>ine</td></tr><tr><td>Upper Sorbian</td><td colspan=\"2\">-1.00 0.333</td><td>3</td><td>up</td><td>5</td><td>11</td><td>11</td><td>ine</td></tr><tr><td>Urdu</td><td colspan=\"2\">-1.00 0.017</td><td>5</td><td colspan=\"2\">down-up 6</td><td>30</td><td>138</td><td>ine</td></tr><tr><td>Uyghur</td><td colspan=\"2\">-1.00 0.017</td><td>5</td><td>flat</td><td>1</td><td>12</td><td>40</td><td>trk</td></tr><tr><td>Vietnamese</td><td colspan=\"2\">-1.00 0.017</td><td>5</td><td>down</td><td>7</td><td>8</td><td>44</td><td>aav</td></tr><tr><td>Welsh</td><td colspan=\"2\">-1.00 0.017</td><td>5</td><td colspan=\"2\">down-up 6</td><td>17</td><td>37</td><td>ine</td></tr><tr><td>West. Armenian</td><td colspan=\"2\">-0.86 0.024</td><td>7</td><td colspan=\"2\">down-up 6</td><td>14</td><td>36</td><td>ine</td></tr><tr><td>Wolof</td><td colspan=\"3\">-1.00 <0.001 8</td><td colspan=\"2\">down-up 2</td><td>13</td><td>44</td><td>nic</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |