|
{ |
|
"paper_id": "K18-1038", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:09:46.567115Z" |
|
}, |
|
"title": "Vectorial semantic spaces do not encode human judgments of intervention similarity", |
|
"authors": [ |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Merlo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Geneva", |
|
"location": { |
|
"addrLine": "5 Rue de Candolle", |
|
"postCode": "CH-1211", |
|
"settlement": "Gen\u00e8ve 4" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Ackermann", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Geneva", |
|
"location": { |
|
"addrLine": "5 Rue de Candolle", |
|
"postCode": "CH-1211", |
|
"settlement": "Gen\u00e8ve 4" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Despite their practical success and impressive performances, neural-network-based and distributed semantics techniques have often been criticized as they remain fundamentally opaque and difficult to interpret. In a vein similar to recent pieces of work investigating the linguistic abilities of these representations, we study another core, defining property of language: the property of long-distance dependencies. Human languages exhibit the ability to interpret discontinuous elements distant from each other in the string as if they were adjacent. This ability is blocked if a similar, but extraneous, element intervenes between the discontinuous components. We present results that show, under exhaustive and precise conditions, that one kind of word embeddings and the similarity spaces they define do not encode the properties of intervention similarity in long-distance dependencies, and that therefore they fail to represent this core linguistic notion.", |
|
"pdf_parse": { |
|
"paper_id": "K18-1038", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Despite their practical success and impressive performances, neural-network-based and distributed semantics techniques have often been criticized as they remain fundamentally opaque and difficult to interpret. In a vein similar to recent pieces of work investigating the linguistic abilities of these representations, we study another core, defining property of language: the property of long-distance dependencies. Human languages exhibit the ability to interpret discontinuous elements distant from each other in the string as if they were adjacent. This ability is blocked if a similar, but extraneous, element intervenes between the discontinuous components. We present results that show, under exhaustive and precise conditions, that one kind of word embeddings and the similarity spaces they define do not encode the properties of intervention similarity in long-distance dependencies, and that therefore they fail to represent this core linguistic notion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Despite their practical success and impressive performances, neural-network-based and distributed semantics techniques have often been criticized as they remain fundamentally opaque and difficult to interpret.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To cast light on what linguistic information is learnt and encoded in these representations, several pieces of work have recently studied core properties of language in syntax (Linzen et al., 2016; Bernardy and Lappin, 2017; Gulordava et al., 2018; Linzen and Leonard, 2018; van Schijndel and Linzen, 2018) , semantics (Herbelot and Ganesalingam, 2013; Erk, 2016) , morphology (Cotterell and Sch\u00fctze, 2015) . In a similar vein, we study another core, defining property of human languages: the property of long-distance dependencies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 197, |
|
"text": "(Linzen et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 224, |
|
"text": "Bernardy and Lappin, 2017;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 248, |
|
"text": "Gulordava et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 274, |
|
"text": "Linzen and Leonard, 2018;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 306, |
|
"text": "van Schijndel and Linzen, 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 352, |
|
"text": "(Herbelot and Ganesalingam, 2013;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 363, |
|
"text": "Erk, 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 406, |
|
"text": "(Cotterell and Sch\u00fctze, 2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Human languages exhibit the ability to interpret discontinuous elements distant from each other in the string as if they were adjacent. 1 Sentence (1a) is a question about the object of the verb buy, whose canonical position is shown in angle brackets, thus connecting the first and last element in the sentence. 2 Sentence (2a) is a relative clause where the object of the verb wash is also the semantic object of the verb show, connecting two distant elements. Sentence (3a) is also a relative clause where the word\u00e9tudiant (student) is the semantic object of the verb endort (put to sleep).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1a) What do you wonder John bought <what> ? (2a) Show me the elephant that the lion is washing <the elephant>.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(3a) Jules sourit aux\u00e9tudiants que l'orateur endort <\u00e9tudiants> s\u00e9rieusement depuis le d\u00e9but.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "'Jules smiles to the students who the speaker is putting seriously to sleep from the beginning.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Long-distance dependencies are not all equally acceptable. The precise description of the facts involving long-distance dependencies is complex, and is one of the major topics of research in current linguistic theory, with many competing proposals 1 To clarify the perhaps confusing terminology: the term long-distance dependencies is a technical term that refers to discontinuous constructions where two elements in the string receive the same interpretation. Long-distance dependency constructions are wh-questions, relative clauses, right-node raising, among others (Rimell et al., 2009; Nivre et al., 2010; Merlo, 2015) . Not all long-distance are actually long, for example subject-oriented relative clauses, and not all long dependencies are long-distance dependencies, for example, long subject-verb agreement as studied in Linzen et al. (2016) ; Bernardy and Lappin (2017) ; Gulordava et al. (2018) is usually not considered a long-distance dependency. 2 The unpronounced element(s) in the long-distance relation are indicated by < >. (Rizzi, 1990; Gibson, 1998) . We will adopt an intuitive and simple explanation, called intervention theory, some aspects of which will be explained in more detail below (Rizzi, 1990 (Rizzi, , 2004 . In a nutshell, a long-distance dependency between two elements in a sentence is difficult or even impossible if a similar element intervenes. For example, sentence (1a) is acceptable while (2a) causes trouble for children (Friedmann et al., 2009) and 3atriggers agreement errors, because in (1a) there is no sufficiently similar intervener (John is animate and is not a question word while what introduces a question and is not animate), while in (2) and 3there is (lion is animate like elephant and\u00e9tudiants (students) is animate like orateur (speaker)).", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 249, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 590, |
|
"text": "(Rimell et al., 2009;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 610, |
|
"text": "Nivre et al., 2010;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 623, |
|
"text": "Merlo, 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 851, |
|
"text": "Linzen et al. (2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 880, |
|
"text": "Lappin (2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 906, |
|
"text": "Gulordava et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 962, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1043, |
|
"end": 1056, |
|
"text": "(Rizzi, 1990;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1070, |
|
"text": "Gibson, 1998)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1213, |
|
"end": 1225, |
|
"text": "(Rizzi, 1990", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1226, |
|
"end": 1240, |
|
"text": "(Rizzi, , 2004", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1465, |
|
"end": 1489, |
|
"text": "(Friedmann et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present results that show, under precise conditions, that one kind of word embeddings and the similarity spaces they define do not encode the notion of intervention similarity involved in longdistance dependencies, but probably only semantic associations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "All languages allow some form of long-distance dependencies under restrictive conditions: for example, (1a) is allowed, but (1b) is not allowed (sentences like (1b) are called weak islands, we keep this terminology), 3 (2a) is hard for children, while (2b) is not, and neither of them is hard for adults, (3a), repeated here as (3b) often triggers agreement mistakes, as shown.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(1b) * What do you wonder who bought <what>?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(2b) Show me the elephant that <the elephant> is washing the lion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(3b) Jules sourit aux\u00e9tudiants que l'orateur <\u00e9tudiants> endort/*endorment <\u00e9tudiants> s\u00e9rieusement depuis le d\u00e9but.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "'Jules smiles to the students who the speaker is/*were putting seriously to sleep from the beginning.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Core to the explanation of these facts is the notion of intervener. An intervener is an element that is similar to the two elements that are in a long-distance relation, and structurally intervenes 3 As always, * means ungrammatical. a. What do you wonder who bought? b. Which book do you wonder who bought? c. Which book do you wonder which linguist bought? Figure 1 : Weak islands (< means better). Acceptability judgments: c < b < a. between the two, blocking the relation. In our examples, potential interveners are shown in bold. 4 This explains why (1a) is ok, since there is a potential intervener, but John and what are not similar, but (1b) is not ok, since there is an intervener, and who and what are similar, as they are both whwords. Sentence (2a) is hard for children as the lion intervenes between the two positions that give meaning to the elephant, but sentence (2b) is not, because nothing intervenes. Sentence (3b) triggers agreement mistakes because the intermediate position of\u00e9tudiants intervenes between the word and the verb, causing interference.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 199, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 536, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 367, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Detailed investigations have shown that longdistance dependencies exhibit gradations of acceptability depending on which features are involved (Rizzi, 2004; Grillo, 2008; Friedmann et al., 2009) . For example, all other things being equal, in complex question environments (weak islands), we have the gradation of judgments shown in Figure 1 , where long-distance dependency involving a lexically restricted wh-phrase (which book or which linguist) is more acceptable than extraction of a bare wh-element (who or what), which is not very good. Experiments on weak islands and relative clauses also show that number triggers intervention effects (Belletti et al., 2012; Bentea, 2016) . Thus, results from theoretical linguistics, acquisition and sentence processing point to a definition of intervener based on 4 Notice that here and in all the following, intervention is defined structurally and not linearly. Linear intervention that does not structurally hierarchically dominate (technically c-command) does not matter as shown by the contrast *When do you wonder who won?/You wonder who won at five compared to When did the uncertainty about who won dissolve?/The uncertainty about who won dissolved at five. (Rizzi, 2013) Also, intervention can be visible in the string, like in (1) and (2), or understood, as in (3). The intermediate step in relating the two elements of the long-distance dependency in (3) is postulated on theoretical grounds (see for example (Chomsky, 2001) , and receives confirmation by participial agreement in languages like French (Kayne, 1989) , or the agreement mistakes in the article we use here (Franck et al., 2015) . See also Gibson and Warren (2004) syntactically-relevant features. 5 The status of a lexical-semantic feature such as animacy remains more controversial; some results argue in favor of an ameliorative effect (Brandt et al., 2009) , some suggest animacy has no effect (Adani, 2012) . Some recent studies show a clear effect of animacy as an intervention feature in wh-islands (Franck et al., 2015; Villata and Franck, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 156, |
|
"text": "(Rizzi, 2004;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 170, |
|
"text": "Grillo, 2008;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 194, |
|
"text": "Friedmann et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 668, |
|
"text": "(Belletti et al., 2012;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 682, |
|
"text": "Bentea, 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 811, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1212, |
|
"end": 1225, |
|
"text": "(Rizzi, 2013)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1466, |
|
"end": 1481, |
|
"text": "(Chomsky, 2001)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1560, |
|
"end": 1573, |
|
"text": "(Kayne, 1989)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1629, |
|
"end": 1650, |
|
"text": "(Franck et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1662, |
|
"end": 1686, |
|
"text": "Gibson and Warren (2004)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1720, |
|
"end": 1721, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1861, |
|
"end": 1882, |
|
"text": "(Brandt et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1920, |
|
"end": 1933, |
|
"text": "(Adani, 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 2028, |
|
"end": 2049, |
|
"text": "(Franck et al., 2015;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2050, |
|
"end": 2075, |
|
"text": "Villata and Franck, 2016)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 333, |
|
"end": 341, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We are going to focus on those features for which relevant data is available, and there's reason to think they could be captured in lexical (semantic) vectors because they are properties of words (in contrast to the more discourse-oriented features, such as +Top.) In particular, we focus on lexical restriction, number and animacy in the definition of intervention similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Sophisticated definition of lexical proximity in feature spaces, called word embeddings, have been defined recently in computational linguistics. These embeddings are the vectorial representation of the meaning of a word, defined as the usage of a word in its context (Wittgenstein, 1953 (Wittgenstein, [2001 ; Harris, 1954; Firth, 1957) . Tasks that confirm this interpretation are association, analogy, lexical similarity, entailment (Mikolov et al., 2013a,b; Pennington et al., 2014; Bojanowski et al., 2016; Henderson and Popa, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 287, |
|
"text": "(Wittgenstein, 1953", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 308, |
|
"text": "(Wittgenstein, [2001", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 324, |
|
"text": "Harris, 1954;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 337, |
|
"text": "Firth, 1957)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 461, |
|
"text": "(Mikolov et al., 2013a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 486, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 511, |
|
"text": "Bojanowski et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 537, |
|
"text": "Henderson and Popa, 2016)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We can, therefore, investigate whether the similarity spaces defined by word embeddings capture the notion of intervention similarity at work in long-distance dependencies. If they do, this means that they encode this core linguistic notion; if they don't this means that word embeddings semantic spaces capture association-based similarities based on world knowledge and textual cooccurrence, but not this more syntax-internal notion of intervention similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Long-distance phenomena and word embeddings", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We investigate whether the popular notion of word embeddings and the notion of vector space similarity built on it are sensitive to the linguistic properties that are used to describe long-distance phenomena. These properties are the explanatory variables of the observed grammaticality judg- 5 Villata (2017, 8) summarizes that the relevant features have been identified as being morphosyntactic features that have the potential to trigger movement, such as [+Q], for whelements, [+R(el)], for the head of the relative clause, [+Top] , for the elements in a topic position, [+Foc], for the focalized elements, and the [+N] feature associated with lexically restricted wh-elements (e.g., which NP). ments derived by intuitive or experimental acceptability judgments. If word embeddings encode the linguistic properties that explain grammaticality judgment in long-distance dependencies, then they should also be effective predictors of the grammaticality of these same sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 294, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 534, |
|
"text": "[+Top]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 623, |
|
"text": "[+N]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "More precisely, let C and C be the two elements linked by a long-distance dependency in sentence F . Let I be the intervener. Let S(C, I) be a similarity score indicating how similar I is to C. 6 Let G F be a score representing the grammaticality of F , as measured numerically by psycholinguistic controlled experiments. Intervention locality theory tells us that high S(C, I) yields ungrammaticality. Then S(C, I) is correlated to G F .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We can encode this theory in vectorial space. Let w C be the word embedding of C and w I the word embedding of I. Let s(w C , w I ) be the similarity score S measured as a distance in vectorial space. Then s(w C , w I ) is correlated to G F , if the similarity notion encoded in word embeddings is the similarity notion that has been shown to be active in long-distance dependencies. If instead word embeddings do not encode an interventionsensitive notion of similarity, we should find no correlation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
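As a concrete illustration of the prediction just stated, the correlation between s(w_C, w_I) and G_F can be checked with a few lines of Python; this is only a sketch, with placeholder arrays rather than the experimental data, and scipy's pearsonr is our choice of test, not something prescribed by the paper.

```python
import numpy as np
from scipy.stats import pearsonr

# One similarity score s(w_C, w_I) and one grammaticality measure G_F per
# stimulus (hypothetical placeholder values, not the experimental data).
similarities = np.array([0.71, 0.39, 0.65, 0.28])
grammaticality = np.array([3.1, 4.2, 3.4, 4.6])

# Intervention theory predicts that higher similarity between C and the
# intervener I goes with lower acceptability (negative r) in weak islands,
# and with longer reading times (positive r) in object relatives.
r, p = pearsonr(similarities, grammaticality)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```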
|
{ |
|
"text": "For example, consider the weak island examples in Figure 2 . Clearly, both the pair (class, student) and the pair (professor, student) are close in a semantic space that simply measures semantic field and association-based similarity. If however, word embeddings learn intervention-relevant notions of similarity, then (professor, student) should be more similar, since they are both animate, compared to (class, student), a pair with a mismatch in animacy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Note that it is crucial here to compute word embeddings in a way that does not encode grammatical, and especially syntactic, information in some other way, to control for effects of syntactic similarity. This could yield positive results for the wrong reasons. This is why we use syntax-lean vectors, as explained below, and not the more dynamic word embeddings calculated in the process of training a neural parser, for example, or a language model (Linzen et al., 2016; Bernardy and Lappin, 2017; Gulordava et al., 2018) . Which professor do you wonder which student appreciated?", |
|
"cite_spans": [ |
|
{ |
|
"start": 450, |
|
"end": 471, |
|
"text": "(Linzen et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 498, |
|
"text": "Bernardy and Lappin, 2017;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 522, |
|
"text": "Gulordava et al., 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Object relatives, NUMBER MATCH Jules sourit\u00e0 l'\u00e9tudiant que l' orateur <\u00e9tudiant>2 endort <\u00e9tudiant>1 s\u00e9rieusement depuis le d\u00e9but.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Jules smiles to the student who the speaker is putting seriously to sleep from the beginning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The question", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Jules sourit aux\u00e9tudiants que l' orateur <\u00e9tudiants>2 endorment <\u00e9tudiants>1 s\u00e9rieusement depuis le d\u00e9but.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object relatives, NUMBER MISMATCH", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Jules smiles to the students who the speaker is putting seriously to sleep from the beginning. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Object relatives, NUMBER MISMATCH", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In what follows, we describe the multiple steps necessary to construct the materials of our experiments. To verify our hypothesis, we need two sets of materials: the experimental measures reflecting the grammaticality of a sentence and the word embeddings to calculate a vector space of similarities. We describe these in turn. We refer to the sentences in Figure 2 as examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 365, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For grammaticality measures, we use the carefully controlled stimuli of three psycholinguistics experiments, kindly provided to us by S. Villata and J. Franck (Franck et al., 2015; Villata and Franck, 2016) . The language studied is French. Subjects were not the same across the tasks. Stimuli are exemplified in Figure 2 . From Franck et al. (2015) we only consider the first experiment, comprising 24 experimental items crossing structure (object relative clauses vs. complement clauses) and the number of the object (singular vs. plural). 7 7 All subject head nouns (e.g. orateur) were singular. Subjects and objects were all animate. An adverb followed by a locative phrase were added after the verb in order to measure potential spillover effects. All test sentences were grammatical with respect to subject-verb agreement. Each sentence was followed by a yes/no comprehension question that probed participants interpretation of the thematic relations in The experimental data is constituted by on-line reading times (milliseconds). Interference is examined on the agreement of the verb in the subordinate clause. We use the reading time corresponding to the critical region, the verb following the intervener word, endort or endorment in our examples in Figure 2 , as was done in the analysis of results in the original experiments. The results show a speed-up effect of number in number mismatches configurations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 180, |
|
"text": "(Franck et al., 2015;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 206, |
|
"text": "Villata and Franck, 2016)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 321, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1260, |
|
"end": 1268, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Materials", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "From Villata and Franck (2016) , we consider both experiments, both manipulating wh-islands. Experiment 1 manipulated the lexical restriction of the wh-elements (both bare vs. both lexically restricted), and the match in animacy between the extracted wh-element and the intervening wh-element (animacy match, where both are animate vs. animacy mismatch, where the extracted wh-element is inanimate and the intervening wh-element is animate). All verbs required animate subjects. Experiment 2 manipulated the lexical restriction of the wh-elements (both bare vs. both lexically restricted), and the reversibility of thematic roles (reversible vs. non-reversible). All wh-elements were animate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 30, |
|
"text": "Villata and Franck (2016)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The data collected are acceptability judgments collected off-line from several subjects, on a seven-point Likert scale. 8 The results show a clear effect of animacy match and reversibility of thematic role match for lexically restricted phrases and less so for bare wh-phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 121, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Notice that these stimuli ensure that the effects, or, more importantly, null effects, that we might find are not limited to a single type of construction and lexical relation, since we test two very different sets of constructions. In the same spirit of testing for a wide set of effects, in one case, we look at effects expressed as offline acceptability, and in the other at online reading times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Materials", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Calculating the word and phrase vectors The pairs of words or phrases indicated in bold in the examples in Figure 2 were used to collect the vector-based similarity space.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 115, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For each of these words we recover a word embedding. We use French word embeddings, from the sentence. Instructions encouraged both rapid reading and correctness in answering the questions (48 fillers, 72 subject).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Facebook Research. These publicly available vectors have been obtained on a 5-word window, for 300 resulting dimensions, on Wikipedia data using the skip-gram model described in Bojanowski et al. (2016) . 9 Every word is represented as an n-grams of characters, for n training between 3 and 6. Each n-gram is represented by a vector and the sum of these vectors forms the vector representing the given word. This technique has been conceived to account for morphological similarities between words. Taking into consideration the fact that words may share morphological properties can improve the quality of the embeddings, and is important in a language like French, that has rich nominal and verbal inflectional morphology. The quality of a sample of these embedding vectors were checked by the two authors, proficient in French, by verifying that the words that are proposed as similar are consistent with intuition. Figure 3 shows the most similar words for two of the words whose word embeddings we calculated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 202, |
|
"text": "Bojanowski et al. (2016)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 919, |
|
"end": 927, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
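A minimal sketch of the subword representation and the nearest-neighbour check just described; the char_ngrams helper, the use of gensim, and the wiki.fr.vec path are our assumptions for illustration (the angle-bracket padding follows the fastText convention), not necessarily the exact tooling used by the authors.

```python
from gensim.models import KeyedVectors

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, with '<' and '>' marking the word
    boundaries (the fastText convention); for illustration only."""
    padded = "<" + word + ">"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

print(char_ngrams("policier"))   # '<po', 'pol', 'oli', ..., 'licier>'

# Hypothetical path to the published French vectors (wiki.fr.vec); in that
# file each word vector is already the sum of its n-gram vectors.
vectors = KeyedVectors.load_word2vec_format("wiki.fr.vec", binary=False)
print(vectors.most_similar("policier", topn=5))    # cf. Figure 3
print(vectors.most_similar("étudiant", topn=5))
```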
|
{ |
|
"text": "As shown in the examples in Figure 2 , we need to measure the vector-based distance between phrases. Once the word vectors of individual words such as quel and professeur, are calculated, we calculate the embeddings of the noun phrases in which the single words combine, such as quel professeur. The vectorial representation of noun phrases is calculated by a composition operation. We used a simple vectorial sum. Since word embeddings are representations of lexical properties, we also report below results using only the bare head word of the noun phrase.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 36, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
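A small sketch of this composition and similarity computation, assuming the individual word vectors have already been retrieved from the pretrained embeddings; the random vectors below are placeholders for illustration only.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholders standing in for vectors retrieved from the pretrained
# embeddings, e.g. vectors["quel"], vectors["professeur"], vectors["étudiant"].
rng = np.random.default_rng(0)
v_quel, v_professeur, v_etudiant = rng.normal(size=(3, 300))

# Noun-phrase vectors obtained by simple vectorial sum.
np_quel_professeur = v_quel + v_professeur
np_quel_etudiant = v_quel + v_etudiant

# s(w_C, w_I): similarity between the target phrase C and the intervener I,
# computed either on the composed phrases or on the bare head nouns.
print(cosine(np_quel_professeur, np_quel_etudiant))
print(cosine(v_professeur, v_etudiant))
```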
|
{ |
|
"text": "Calculating the similarity Once these vectors are calculated, we still have several options of which operator to use to calculate the distance between the vectors representing the two phrases C and I.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The similarity operators Beside the lexical specification of the vectors and their composition, the operator used to measure similarity also provides a dimension of experimental variation. The cosine is a well-known and efficient measure of vector similarity. It is based on a rescaling of the dot product of the vectors and it is a symmetric measure. It has been shown to capture associative and analogical semantic similarity in vector space POLICIER (policeman) cambrioleur (burglar) kidnappeur (kidnapper) chauffeur (driver) criminel (offender) d\u00e9tective (detective) ETUDIANT (student) enseignant (teacher) professeur (professor) chercheur (researcher) doctorant (doctoral student) camarade (fellow) Figure 3 : Five most similar words for word policier and\u00e9tudiant. (Mikolov et al., 2013a,b; Pennington et al., 2014; Bojanowski et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 795, |
|
"text": "(Mikolov et al., 2013a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 820, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 845, |
|
"text": "Bojanowski et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 704, |
|
"end": 712, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Once the distance between the vectors is calculated, in the final step, we correlate the calculated word embedding similarities with the psycholinguistic acceptability judgments. 10", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.2" |
|
}, |
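This final step can be sketched as follows, assuming one similarity score and one behavioural measure per experimental item; scipy's linregress is used here because it returns the slope (m), Pearson r and p-value of the kind reported in Table 1, but the arrays below are invented placeholders, not the experimental data.

```python
import numpy as np
from scipy.stats import linregress

# One row per experimental item (placeholder values).
similarity = np.array([0.68, 0.70, 0.41, 0.29, 0.63, 0.35])     # s(w_C, w_I)
measure = np.array([962.0, 896.0, 910.0, 870.0, 940.0, 880.0])  # e.g. reading time (ms)

# Slope (m), Pearson correlation (r) and p-value, per condition or overall.
result = linregress(similarity, measure)
print(f"m = {result.slope:.2f}, r = {result.rvalue:.3f}, p = {result.pvalue:.3f}")
```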
|
{ |
|
"text": "Recall that in weak islands (see Figure 2) , the expected outcome is an inverse proportionality between the two variables: the higher the semantic similarity, the stronger the interference, and consequently, the lower the average acceptability score of the sentence. In the case of object relative clauses (see Figure 2 ), we expected to observe a direct proportionality between the two variables: the higher the semantic similarity, the stronger the interference, and consequently the longer the average reading time devoted to the verb in the relative clause. Figures 4a and 4b show the (lack of) correlations between s(w C , w I ) and the grammaticality judgments of the experiments on weak islands, both with bare nouns and composed noun phrases. Figures 5a and 5b show the (lack of) correlations between s(w C , w I ) and the reaction times of the critical region, the verb, both with bare nouns and composed noun phrases, in object relative clauses. Regression values are shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 42, |
|
"text": "Figure 2)", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 319, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 580, |
|
"text": "Figures 4a and 4b", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 770, |
|
"text": "Figures 5a and 5b", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 996, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Results clearly show no correlations in all conditions. This is converging evidence that word embeddings do not represent the intervention notion of similarity, but they encode similarities based on associations and world knowledge. More explicitly, take the two examples of weak islands in Figure 2 . Human judgments differentiate clearly the two sentences, the first being more acceptable than the second. In the first sentence, Quel cours te demandes-tu quel\u00e9tudiant a appr\u00e9ci\u00e9? (Which Table 1 : Regressions (m), correlations (Pearson r) and p-values. ss=semantic similarity (cosine); as=asymmetric similarity (lexical entailment); WI=weak island; OR=object relative clauses; b1/2=bare noun 1/2; whp1/2=wh-phrase 1/2; v=verb.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 299, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 496, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results with the cosine operator", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "class do you wonder which student appreciated?), the two target words, in bold, do not match in animacy, hence the intervener does not block the long-distance relation as strongly as in the second sentence, Quel professeur te demandes-tu que\u013a etudiant a appr\u00e9ci\u00e9? (Which professor do you wonder which student appreciated?), where they do. People are sensitive to this difference, even if cours, professor and\u00e9tudiant are all words belonging to the same semantic field and closely connected by semantic association. The word embeddings we have tested here fail to capture this difference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results with the cosine operator", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The lack of correlation prompts a more detailed analysis of the results. In particular, notice that in the experimental work a binary (not continuous) distinction -animate vs. inanimate, plural vs. singular -was manipulated and correlated to the acceptability and re-action times. We are, instead, requiring a correlation between similarity and acceptability in the animacy case and similarity and number in the reaction times. That is, we are imposing a stricter correspondence, which requires the level of similarity to continuously vary with all the experimental results. We verify then if weaker forms of correlation give us more positive results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "First of all, we can require the similarity measure to make only a binary distinction. For the experiment manipulating animacy in wh-islands, we do find the expected inverse correlation between mean similarity and mean acceptability depending on the value of the animacy factor. 11 For the experiment manipulating number in relative clauses, instead, we do not find the expected direct correlation between mean similarity and mean reading time depending on the value of the number factor. 12 Another less stringent way of looking for correspondences is to take the manipulated binary factor into account, and verify if there is a partial correlation. In both cases, the correlation is weak. 13", |
|
"cite_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 491, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
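A sketch of these weaker tests, assuming per-item values for the manipulated binary factor, the similarity score and the behavioural measure; the ordinary least-squares fit below mirrors the form of the regressions reported in footnote 13, with placeholder numbers rather than the experimental data.

```python
import numpy as np

# Hypothetical per-item data: the manipulated binary factor (0 = match,
# 1 = mismatch), the similarity score s(w_C, w_I), and the measure
# (mean acceptability or reading time).
factor = np.array([0, 0, 0, 1, 1, 1])
similarity = np.array([0.41, 0.39, 0.38, 0.30, 0.28, 0.31])
measure = np.array([3.6, 3.7, 3.6, 4.0, 4.1, 3.9])

# Binary check: compare condition means of similarity and of the measure.
for level in (0, 1):
    sel = factor == level
    print(level, similarity[sel].mean(), measure[sel].mean())

# Multiple regression measure ~ factor + similarity + intercept, analogous
# in form to the regressions reported in footnote 13.
X = np.column_stack([factor, similarity, np.ones_like(similarity)])
coef, *_ = np.linalg.lstsq(X, measure, rcond=None)
print("coefficients (factor, similarity, intercept):", coef)
```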
|
{ |
|
"text": "11 Animate relative head (match condition): mean similar-ity=0.394, mean acceptability=3.65; inanimate relative head (mismatch condition): mean similarity=0.293, mean accept-ability=4.00).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "12 Singular relative head (match condition): mean simi-larity=0.678, mean reading time=962.96; plural relative head (mismatch condition): mean similarity=0.705, mean reading time=896.03). Notice in fact, that the lack of correspondence could be even more basic, as the average similarity score for the number match condition is lower than for the number mismatch condition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "13 A multiple regression of accuracy on animacy and similarity yields accuracy= 0.46 anim=inanimate + 0.83 similarity + 3.33 with correlation coefficient 0.229; a multiple regression of reading times on number and similarity yields reading times= 72.48 num=plural + 203.53 similar- Results with asymmetric operator It could also be pointed out that while the null results were confirmed across construction types (weak islands and object relatives), experimental methodologies (off-line grammaticality judgments and online reading times), only the cosine operator was used to calculate similarity. The two vectors that are being compared, w C and w I , correspond, linguistically to C and I above. It has been shown that, from a linguistic point of view, the grammaticality judgments differ depending on whether the feature set of C is properly included or properly includes I. If the features of C are a superset of the features of I, sentences are judged more acceptable (Rizzi, 2004) . Independently of the exact details of the linguistic explanation, these finegrained differences in grammaticality judgments suggest that it might be more appropriate to calculate similarity with an asymmetric operator. The asymmetric measure we use here has been developed to capture the notion of entailment. It captures the idea that the values in a distributed semantic vector do not represent presence or absence of a property (true or false), but knowledge or lack of knowledge about a property of the referent entity of the noun whose meaning the vector represents: A entails B iff when I know A I know everything about B. This operator has been shown to learn the notion of hyponymy better than other methods (Henderson and Popa, 2016) . 14 Since this operator has so far only been applied to English, we need to develop the training and development sets for French. For our experiments, ity + 752.45, with correlation coefficient: -0.499.", |
|
"cite_spans": [ |
|
{ |
|
"start": 973, |
|
"end": 986, |
|
"text": "(Rizzi, 2004)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1705, |
|
"end": 1731, |
|
"text": "(Henderson and Popa, 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1734, |
|
"end": 1736, |
|
"text": "14", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "14 The operators are calculated by the following formula, where y, x are word embeddings vectors with length d, being we translated all the word pairs from English to French. 15 We kept the same configurations of the training sets of word pairs, as described in the experiments by Henderson and Popa. The system uses these pairs coupled with the gold answer (1 if the entailment is true, 0 if it is not) to train on hyponymy-hypernymy relations. The data used for training are noun-noun word pairs that include positive hyponymy pairs, negative pairs consisting of different hyponymy pairs reversed, pairs in other semantic relations, and some random pairs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 177, |
|
"text": "15", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We modify the operator (we use unk dup, > ), so that it does not to give us a binary decision (x entails y yes/no), but so that it outputs a real value, indicating how much x entails y, or rather how much x is asymmetrically similar to y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With this operator, we produce the results shown in Figures 6a and 6b , for bare noun phrases. 16 Figure 6a shows the (lack of) correlations between s(w C , w I ) and the grammaticality judgments of the experiments on weak islands. Figure 6b shows the (lack of) correlations between projected in a different space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 97, |
|
"text": "16", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 69, |
|
"text": "Figures 6a and 6b", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 107, |
|
"text": "Figure 6a", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 241, |
|
"text": "Figure 6b", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "log(P (y\u21d2x)) \u2248 (\u03c3(\u2212(y\u22121)) \u2022 log \u03c3(\u2212(x\u22121)) + \u03c3(\u2212(\u2212y\u22121)) \u2022 log \u03c3(\u2212(\u2212x\u22121)))/d (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The first dot product stands for the true-versus-unknown interpretation of the vectors and the second dot product represents the false-versus-unknown interpretation. \u03c3 is the logistic sigmoid function s(w C , w I ) and the reaction times of the critical region in object relative clauses. Regression values are shown in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 327, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
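Read literally, equation (1) and its explanation can be turned into a small scoring function; this is a naive sketch of the formula as reproduced in footnote 14, with placeholder vectors, and not the authors' or Henderson and Popa's released implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entailment_score(y, x):
    """Real-valued asymmetric score log P(y => x) following equation (1):
    the first dot product is the true-versus-unknown term, the second the
    false-versus-unknown term; log and sigmoid apply componentwise."""
    d = y.shape[0]
    true_term = np.dot(sigmoid(-(y - 1)), np.log(sigmoid(-(x - 1))))
    false_term = np.dot(sigmoid(-(-y - 1)), np.log(sigmoid(-(-x - 1))))
    return (true_term + false_term) / d

# Placeholder 300-dimensional vectors standing in for the two word embeddings.
rng = np.random.default_rng(0)
y, x = rng.normal(size=(2, 300))
print(entailment_score(y, x))   # how much y asymmetrically entails x
```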
|
{ |
|
"text": "These results also confirm a lack of correlation. The convergence of these results is important as null effects are always hard to confirm and explain, and care must be taken to show that alternative explanations are not possible. In this case, all experiments, across constructions (weak island and object relative clauses), across type of noun phrase (bare or composed), across measurement method of the experimental dependent variable (off-line grammaticality judgments and online reaction times), and across operators (symmetric and asymmetric) show a consistent lack of correlation between measurements collected in experiments that manipulated the similarity of the elements, and the notion of similarity encoded in word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This consistent lack of effect allows us to conclude that while current word embeddings, i.e. dictionaries in a multi-dimensional vectorial space, clearly encode a notion of similarity, as shown by many experiments on analogical tasks and textual and lexical similarity, they do not however encode the notion of similarity that has been shown in many human experiments to be at work and to be definitional in long-distance dependencies. They do not encode therefore this core notion of intervention similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This work is situated in a rich body of computational research that attempts to establish the boundaries of what distributed semantic representations and neural networks can learn. These studies have concentrated on structural grammatical competence, exemplified by long-distance agreement, a task thought to require hierarchical, and not only linear, information. The first study, (Linzen et al., 2016) , has tested recursive neural network (RNN) language models and found that RNNs can learn to predict English subjectverb agreement, if provided with explicit supervision. In a follow up paper, Bernardy and Lappin (2017) find that RNNs are better at long-distance agreement if they can use large vocabularies to form rich lexical representations to learn structural patterns. This finding suggests that RNNs learn syntactic patterns through rich lexical embeddings, based both on semantic and syntactic evidence. Gulordava et al. (2018) revisit previous work, and extend the work on long-distance agreement to four languages of different linguistic properties (Italian, English, Hebrew, Russian). They use the technique of developing counterfactual data, typical of theoretical and experimental work and already used for parsing in Gulordava and Merlo (2016) and train the system on nonsensical sentences. Their model makes accurate predictions and compares well with humans, thereby suggesting that the networks learn deeper grammatical competence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 403, |
|
"text": "(Linzen et al., 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 916, |
|
"end": 939, |
|
"text": "Gulordava et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1235, |
|
"end": 1261, |
|
"text": "Gulordava and Merlo (2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "On the linguistic and psycholinguistic side, this work contributes to the investigation of the formal encoding of long-distance dependencies, following the theoretical lines laid in the first formulation of intervention theory of long-distance dependencies (Rizzi, 1990) , made gradual and more finegrained in subsequent work (Rizzi, 2004) , and verified experimentally in both sentence processing and acquisition (Franck et al., 2015; Villata and Franck, 2016; Friedmann et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 270, |
|
"text": "(Rizzi, 1990)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 339, |
|
"text": "(Rizzi, 2004)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "(Franck et al., 2015;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 461, |
|
"text": "Villata and Franck, 2016;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 485, |
|
"text": "Friedmann et al., 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Human languages exhibit the ability to interpret discontinuous elements distant from each other in the string as if they were adjacent, but this longdistance relation can be disrupted by a similar intervening element. Speakers report lower acceptability and longer reading times. In this paper, we have presented results that show that the similarity spaces defined by one kind of word embeddings do not encode this notion of intervention similarity in long-distance dependencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Future work requires investigating more directly the grammatical aspects of the nature of the similar and dissimilar words in the embeddings and extend the experimentation to other kinds of vector spaces, a much larger dataset, and replication in more constructions and more languages. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "C and C are fundamentally the same, so we will consider only C here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Subjects (42) were instructed that there were no time constraints. The stimulus set consisted of 32 experimental items that gave rise to 128 sentences and 132 fillers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/facebookresearch/ fastText/blob/master/pretrained-vectors. md", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The list of words and the detailed experimental results are given in the supplementary materials.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1+exp(\u2212x) , and the log and \u03c3 functions are applied componentwise.15 We use WordReference online multilingual dictionary, available at www.wordreference.com.16 Given the null results discussed below, we do not test another configuration, where we would have used the entailment operator on the composed noun phrase stimuli.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Julie Franck and Sandra Villata for sharing the data they have collected in their experiments, and James Henderson and Diana Nicoleta Popa for sharing with us their hyponymy detection script.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "8" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Some notes on the acquisition of relative clauses: new data and open questions", |
|
"authors": [ |
|
{ |
|
"first": "Flavia", |
|
"middle": [], |
|
"last": "Adani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "ENJOY LINGUISTICS! Papers offered to Luigi Rizzi on the occasion of his 60th birthday", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Flavia Adani. 2012. Some notes on the acquisition of relative clauses: new data and open questions. In ENJOY LINGUISTICS! Papers offered to Luigi Rizzi on the occasion of his 60th birthday, pages 6-13. CISCLPress.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Does gender make a difference? Comparing the effect of gender on children's comprehension of relative clauses in Hebrew and Italian", |
|
"authors": [ |
|
{ |
|
"first": "Adriana", |
|
"middle": [], |
|
"last": "Belletti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naama", |
|
"middle": [], |
|
"last": "Friedmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Brunato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Rizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Lingua", |
|
"volume": "122", |
|
"issue": "10", |
|
"pages": "1053--1069", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adriana Belletti, Naama Friedmann, Dominique Brunato, and Luigi Rizzi. 2012. Does gender make a difference? Comparing the effect of gender on chil- dren's comprehension of relative clauses in Hebrew and Italian. Lingua, 122(10):1053-1069.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Intervention effects in language acquisition: the comprehension of A-bar dependencies in French and Romanian", |
|
"authors": [ |
|
{ |
|
"first": "Anamaria", |
|
"middle": [], |
|
"last": "Bentea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anamaria Bentea. 2016. Intervention effects in lan- guage acquisition: the comprehension of A-bar de- pendencies in French and Romanian. Ph.D. thesis, University of Geneva.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Using deep neural networks to learn syntactic agreement. Linguistic Issues in Language Technology", |
|
"authors": [ |
|
{ |
|
"first": "Jean-", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Bernardy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shalom", |
|
"middle": [], |
|
"last": "Lappin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Philippe Bernardy and Shalom Lappin. 2017. Us- ing deep neural networks to learn syntactic agree- ment. Linguistic Issues in Language Technology, 15(2):1-15.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The discourse bases of relativization: An investigation of young German and English-speaking children's comprehension of relative clauses", |
|
"authors": [ |
|
{ |
|
"first": "Silke", |
|
"middle": [], |
|
"last": "Brandt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Kidd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Lieven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Tomasello", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Cognitive Linguistics", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "539--570", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silke Brandt, Evan Kidd, Elena Lieven, and Michael Tomasello. 2009. The discourse bases of rela- tivization: An investigation of young German and English-speaking children's comprehension of rela- tive clauses. Cognitive Linguistics, 20(3):539-570.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Derivation by phase", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Life in Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Chomsky. 2001. Derivation by phase. In Michael Kenstowicz, editor, Ken Hale: A Life in Language, pages 1-52. MIT Press, Cambridge,MA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Morphological word-embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1287--1292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell and Hinrich Sch\u00fctze. 2015. Morpho- logical word-embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1287-1292, Denver, Colorado. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "What do you know about an alligator when you know the company it keeps?", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk. 2016. What do you know about an alligator when you know the company it keeps? Semantics and Pragmatics, 9(17):1-63.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A synopsis of linguistic theory 1930-1955", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Rupert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Firth", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "Studies in linguistic analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Rupert Firth. 1957. A synopsis of linguistic the- ory 1930-1955. In Studies in linguistic analysis, pages 1-32. Blackwell, Oxford.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Task-dependency and structure dependency in number interference effects in sentence comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Franck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Colonna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Rizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Frontiers in Psychology", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julie Franck, S. Colonna S., and Luigi Rizzi. 2015. Task-dependency and structure dependency in num- ber interference effects in sentence comprehension. Frontiers in Psychology, 6.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Relativized relatives: Types of intervention in the acquisition of A-bar dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Naama", |
|
"middle": [], |
|
"last": "Friedmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adriana", |
|
"middle": [], |
|
"last": "Belletti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Rizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Lingua", |
|
"volume": "119", |
|
"issue": "1", |
|
"pages": "67--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naama Friedmann, Adriana Belletti, and Luigi Rizzi. 2009. Relativized relatives: Types of intervention in the acquisition of A-bar dependencies. Lingua, 119(1):67 -88.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Linguistic complexity: Locality of syntactic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Cognition", |
|
"volume": "68", |
|
"issue": "", |
|
"pages": "1--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68:1-76.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Reading time evidence for intermediate linguistic structure in long-distance dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tessa", |
|
"middle": [], |
|
"last": "Warren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Syntax", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Gibson and Tessa Warren. 2004. Reading time evidence for intermediate linguistic structure in long-distance dependencies. Syntax, pages 55-78.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Generalized minimality: Syntactic underspecification in Broca's aphasia", |
|
"authors": [ |
|
{ |
|
"first": "Nino", |
|
"middle": [], |
|
"last": "Grillo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nino Grillo. 2008. Generalized minimality: Syntactic underspecification in Broca's aphasia. Ph.D. thesis, University of Utrecht.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Colorless green recurrent networks dream hierarchically", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Gulordava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1195--1205", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Multilingual dependency parsing evaluation: a large-scale analysis of word order properties using artificial data", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Gulordava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Merlo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Gulordava and Paola Merlo. 2016. Multi- lingual dependency parsing evaluation: a large-scale analysis of word order properties using artificial data. Transactions of the Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{ |
|
"first": "Zellig", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A vector space for distributional semantics for entailment", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Popa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2052--2062", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Henderson and Diana Popa. 2016. A vector space for distributional semantics for entailment. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2052-2062, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Measuring semantic content in distributional vectors", |
|
"authors": [ |
|
{ |
|
"first": "Aur\u00e9lie", |
|
"middle": [], |
|
"last": "Herbelot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohan", |
|
"middle": [], |
|
"last": "Ganesalingam", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--445", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aur\u00e9lie Herbelot and Mohan Ganesalingam. 2013. Measuring semantic content in distributional vec- tors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 440-445, Sofia, Bul- garia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Romance clitics,verb movement and PRO", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Kayne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Linguistic Inquiry", |
|
"volume": "22", |
|
"issue": "4", |
|
"pages": "647--686", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Kayne. 1989. Romance clitics,verb movement and PRO. Linguistic Inquiry, 22(4):647-686.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Dupoux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "521--535", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Distinct patterns of syntactic agreement errors in recurrent networks and humans", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Leonard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 40th Annual Conference of the Cognitive Science Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proceedings of the 40th Annual Conference of the Cognitive Science Society.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Evaluation of two-level dependency representations of argument structure in longdistance dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Paola", |
|
"middle": [], |
|
"last": "Merlo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Third International Conference on Dependency Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paola Merlo. 2015. Evaluation of two-level depen- dency representations of argument structure in long- distance dependencies. In Proceedings of the Third International Conference on Dependency Linguis- tics (Depling 2015), pages 221-230.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed rep- resentations of words and phrases and their com- positionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Pro- ceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111- 3119.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Evaluation of dependency parsers on unbounded dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos G\u00f3mez", |
|
"middle": [], |
|
"last": "Rodr\u00edguez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "833--841", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos G\u00f3mez Rodr\u00edguez. 2010. Evaluation of de- pendency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 833-841, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Unbounded dependency recovery for parser evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Rimell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "813--821", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Pro- cessing, pages 813-821, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Relativized Minimality", |
|
"authors": [ |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Rizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luigi Rizzi. 1990. Relativized Minimality. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The cartography of syntactic structures, number 3 in Structures and beyond", |
|
"authors": [ |
|
{ |
|
"first": "Luigi", |
|
"middle": [], |
|
"last": "Rizzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "223--251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luigi Rizzi. 2004. Locality and left periphery. In Adriana Belletti, editor, The cartography of syntac- tic structures, number 3 in Structures and beyond, pages 223-251. Oxford University Press, New York.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Modeling garden path effects without explicit hierarchical syntax", |
|
"authors": [ |
|
{ |
|
"first": "Marten", |
|
"middle": [], |
|
"last": "Van Schijndel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 40th Annual Conference of the Cognitive Science Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syn- tax. In Proceedings of the 40th Annual Conference of the Cognitive Science Society.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Intervention effects in sentence processing", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Villata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Villata. 2017. Intervention effects in sentence processing. Ph.D. thesis, Universite de Geneve. Https://archive-ouverte.unige.ch/unige:101927.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Semantic similarity effects on weak islands acceptability", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Villata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Franck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "41st Incontro di Grammatica Generativa Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Villata and Julie Franck. 2016. Seman- tic similarity effects on weak islands acceptabil- ity. In 41st Incontro di Grammatica Genera- tiva Conference, Perugia, Italy. Https://archive- ouverte.unige.ch/unige:82418.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Philosophical Investigations", |
|
"authors": [ |
|
{ |
|
"first": "Ludwig", |
|
"middle": [], |
|
"last": "Wittgenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1953, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ludwig Wittgenstein. (1953) [2001]. Philosophical In- vestigations. Blackwell Publishing.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The linguistic constructions and experimental materials" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Weak islands, cosine operator." |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Object relative clauses, cosine operator." |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Asymmetric operator." |
|
}, |
|
"TABREF1": { |
|
"text": "Weak islands, ANIMACY MISMATCH Quel cours te demandes-tu quel\u00e9tudiant a appr\u00e9ci\u00e9?[+Q,+N,-A] [+Q,+N,+A] Which class do you wonder which student appreciated?Weak islands, ANIMACY MATCH Quel professeur te demandes-tu quel\u00e9tudiant a appr\u00e9ci\u00e9?[+Q,+N,+A] [+Q,+N,+A]", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |