{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:50.207957Z"
},
"title": "The Impact of Word Embeddings on Neural Dependency Parsing",
"authors": [
{
"first": "Benedikt",
"middle": [],
"last": "Adelmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fachbereich Informatik Universit\u00e4t Hamburg",
"location": {}
},
"email": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Menzel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fachbereich Informatik Universit\u00e4t Hamburg",
"location": {}
},
"email": ""
},
{
"first": "Heike",
"middle": [],
"last": "Zinsmeister",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Hamburg",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Using neural models to parse natural language into dependency structures has improved the state of the art considerably. These models heavily rely on word embeddings as input rep resentations, which raises the question whether the observed improvement is contributed by the learning abilities of the network itself or by the lexical information captured by means of the word embeddings they use. To answer this question, we conducted a series of experiments on German data from three different genres using artificial embeddings intentionally made uninformative in different ways. We found that without the context information provided by the embeddings, parser performance drops to that of conventional parsers, but not below. Ex periments with domainspecific embeddings, however, did not yield additional improve ments in comparison to largescale general purpose embeddings.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Using neural models to parse natural language into dependency structures has improved the state of the art considerably. These models heavily rely on word embeddings as input rep resentations, which raises the question whether the observed improvement is contributed by the learning abilities of the network itself or by the lexical information captured by means of the word embeddings they use. To answer this question, we conducted a series of experiments on German data from three different genres using artificial embeddings intentionally made uninformative in different ways. We found that without the context information provided by the embeddings, parser performance drops to that of conventional parsers, but not below. Ex periments with domainspecific embeddings, however, did not yield additional improve ments in comparison to largescale general purpose embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, using neural models has notably improved the accuracy of dependency parsing, compared to nonneural or 'conventional' statist ical parsers. However, while typical nonneural parsers normally have to extract all knowledge en coded in their models, including lexical inform ation, from the training data, i. e. a dependency treebank, neural dependency parsers are usually endowed with word embeddings in addition to the treebank, not only at training, but also at test time. Given that embeddings are highly informat ive about distributional properties of the embed ded entities (words in this case), which probably correlate with the possibility or plausibility of syn tactic relationships, and that they are generally trained on corpora orders of magnitude larger than the dependency treebanks available for any lan guage, this can be seen as an additional external source of information that conventional parsers do not have at their disposal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This raises the question of how much of the re ported difference, if any, is due to the neural model being better at modelling syntax and how much is just due to the information in the embeddings. On the one hand, one could argue that this distinction is irrelevant because the comparison reflects the way the systems would be used in practice. On the other hand, however, it is scientifically unsound to derive claims about capability differences of mod els or formalisms from experiments where more than just the model or formalism changes with re spect to a control setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, insight into the individual influ ence of model parts on the overall output (or at least its quality) can be seen as a step towards (some kind of) interpretability. Understanding the influence of embeddings is especially useful in lan guage processing, where most knowledge is sym bolic while neural networks necessarily operate on continuous representations. As it is embeddings of some kind that bridge this gap, systems should not be too dependent on their quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To gain more insight into this dependence of de pendency parsing on embeddings, we have con ducted experiments with a neural dependency parser provided with deterministically uninform ative as well as random word embeddings and we report on the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To our knowledge, the mechanisms leading to neural parsers exhibiting better performance than conventional ones have not yet been investigated. It has been shown that recurrent neural networks are able to capture syntactic structures such as nest ing in practice as long as the depth is bounded (Bhattamishra et al., 2020) , but this does not make a statement about whether or why they are better at it than conventional parsers, and it remains unclear what influence the input embeddings have on this capability.",
"cite_spans": [
{
"start": 295,
"end": 322,
"text": "(Bhattamishra et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The question how a model output changes when trained and evaluated on different input embed dings, specifically word embeddings, has been ad dressed by Rios and Lwowski (2020) . They train numerous word embeddings using Word2Vec, GloVe or fastText, each with various different initialization seeds and on different corpora, and compare the performance of models when using these different embeddings as input. We take a similar approach, except that we use 'artificial' embeddings, and while their focus is on the con sequences of embedding differences due to al gorithm and initialization, we are interested in the impact of the (distributional) semantics available through the embedding in the first place.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "Rios and Lwowski (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For a short period in time there were even some neural parsing architectures without (external) em beddings, such as the ISBN parser by Titov and Henderson (2007) . Its reported performance was well below what current parsers (with external em beddings) achieve, similar indeed to that of non neural parsers.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "Titov and Henderson (2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Within a more recent apporach, parsing per formance with and without external word em beddings has been compared by Kiperwasser and Goldberg (2016) , who mention a counterintuitive finding that external word embeddings degraded the performance of one of their parsers. In the small ablation study they report, however, the ad dition of external embeddings was accompanied by a change in parsing strategy (from graphbased to greedy transitionbased), not allowing for con clusions about the impact of the embeddings alone.",
"cite_spans": [
{
"start": 116,
"end": 147,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More generally, there has been growing interest in the relationship between embeddings and down stream tasks in recent years, usually with a fo cus on the knowledge possibly encoded in the em bedding, but also on how this knowledge and its representation affect further processing to which it is used as input. Much work on this topic has been concerned with sentence embeddings\u037e for ex ample, Miaschi et al. (2020) find a correlation between the amount of linguistic knowledge rep resented in a sentence embedding and its ability to solve a specific downstream task. They also provide evidence that finetuning the embedding makes it represent more taskspecific knowledge at the expense of general knowledge.",
"cite_spans": [
{
"start": 394,
"end": 415,
"text": "Miaschi et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A popular method for assessing what linguistic knowledge an embedding represents is probing tasks (a term that seems to have been coined by Conneau et al., 2018 , based on Adi et al., 2017 , and Shi et al., 2016 , classifiers trained to reconstruct known explicit linguistic properties from embed dings. In one sense, dependency parsing can be seen as a probing task where the linguistic prop erty to be extracted is the dependency structure of a sentence, and has indeed been used as a probing task (Miaschi et al., 2020\u037e Kunz and Kuhlmann, 2020) . However, 'viewing probing results in isol ation can lead to overestimating the linguistic cap abilities of a model' (Mosbach et al., 2020, p. 780) , and Kunz and Kuhlmann (2020) point out that in such scenarios, it is generally unknown to what ex tent the output is indeed present in and extracted from the embedding, as opposed to being learned by the model ('probe') built on top of it. They con sider embeddings to most likely lie between two extremes: no useful information being represen ted at all, or the information already being rep resented in a humanreadable way. Apart from re stricting the probing classifier to limited express iveness, one possibility of distinguishing embed ding from classifier power is therefore the compar ison with the results of probing baseline embed dings lacking any linguistic information content, a common choice being random ones. We too use randomness as one way to create such embeddings.",
"cite_spans": [
{
"start": 140,
"end": 160,
"text": "Conneau et al., 2018",
"ref_id": "BIBREF5"
},
{
"start": 161,
"end": 188,
"text": ", based on Adi et al., 2017",
"ref_id": null
},
{
"start": 189,
"end": 211,
"text": ", and Shi et al., 2016",
"ref_id": "BIBREF22"
},
{
"start": 500,
"end": 531,
"text": "(Miaschi et al., 2020\u037e Kunz and",
"ref_id": null
},
{
"start": 532,
"end": 547,
"text": "Kuhlmann, 2020)",
"ref_id": "BIBREF14"
},
{
"start": 666,
"end": 696,
"text": "(Mosbach et al., 2020, p. 780)",
"ref_id": null
},
{
"start": 703,
"end": 727,
"text": "Kunz and Kuhlmann (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A study relating wordlevel probing tasks to higherlevel processing for several languages, in cluding dependency parsing for German, can be found in \u015eahin et al. (2020) . They report signific ant correlations between dependency parsing and morphosyntactic probing performance, suggesting that not only semantic, but also morphosyntactic information encoded in a word embedding can be influential. Note though that neural dependency parsing based on word embeddings is different from probing sentence embeddings for dependen cies of the encoded sentence. One could say that the situation is the converse: In the probing scen ario, the embedding is the result of a procedure and is probed to investigate its dependence on the ori ginal input. In our case, the embeddings are the input, and we want to investigate the dependence of the procedure on it. There are similar findings to the above for word embeddings, due to K\u00f6hn (2016) , attesting the choice of embeddings a no ticeable impact on parser performance.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "\u015eahin et al. (2020)",
"ref_id": null
},
{
"start": 917,
"end": 928,
"text": "K\u00f6hn (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As we cannot directly inspect what the neural ar chitecture learns and whether it is indeed better than 'conventional' (nonneural) architectures at learning the syntactic knowledge needed for pars ing, we employ a proxy question instead and ask how the output of a neural parser changes when depriving it of the knowledge encoded in the in put word embeddings, as these embeddings are an additional input that most conventional parsers do not have at their disposal. If the neural parser per forms significantly better than conventional pars ers when provided with the same input, its neural architecture is obviously a better learner of syntax than the architectures of the conventional parsers. On the other hand, if the neural parser needs more input (i. e. the embeddings) than the conventional parsers to outperform them, the comparison is in herently unfair as it is hardly surprising that a sys tem with more input can yield better predictions. While this does not necessarily rule out the pos sibility that the neural architecture is superior, the performance impact of eliminating a source of in formation sheds light on the dependence on that in formation. Such a dependence may be undesirable in certain contexts, such as lowresource settings where highquality word embeddings are unavail able.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "Another common scenario is that of domain ad aptation, where only a generic treebank of con siderable size is available for training, but spe cific embeddings can be obtained in an unsu pervised 1 way from indomain data (possibly the same data one wishes to parse later), which may be much smaller than the data employed for training generalpurpose embeddings. We complement our experiments on the impact of uninformative em beddings by also providing the parser with embed dings trained on the corpora from which we draw our test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "The parser we experiment with is Sticker (de Kok and P\u00fctz, 2020), a recent neural dependency parser treating parsing as a sequence labelling problem: Every token is assigned a complex tag encoding where to attach it. In the case of Sticker, the tags indicate the attachment point as its relative posi tion among tokens with a part of speech (e. g. 'the second finite verb to the left') and are computed by a neural network. (From the different archi tectural options we chose the LSTM architecture, which had turned out to work best on our data.) The only information that the neural network is provided with as input are embedding vectors of the tokens (words) in the sentence and of their part ofspeech (POS) tags. At training time, the parser trains the network based on these inputs (and the gold dependency structure and labels), but it does not alter the embeddings provided nor save any other lexical information about words in the train ing data\u037e in particular, there is no attempt to ob tain semantic knowledge about words not covered by the embedding. 2 This implies a substantial de pendence on those embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3.1"
},
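{
"text": "To make the sequence-labelling formulation concrete, the following minimal Python sketch illustrates how a tag of the form 'the n-th token with POS p to the left/right' can be resolved to a head index given only the POS sequence of the sentence; the tuple tag format and the function are our illustrative assumptions, not Sticker's actual tag encoding or code.\n\n# Illustrative decoding of a relative-position dependency tag.\n# Assumed (hypothetical) tag format: (pos, offset), where offset = -2 means\n# 'the second token with this POS to the left of the dependent token'.\n\ndef decode_head(pos_tags, dep_index, tag):\n    '''Return the head index for the token at dep_index, or None.'''\n    pos, offset = tag\n    if offset == 0:\n        return -1  # by convention here: attach to the artificial root\n    step = 1 if offset > 0 else -1\n    remaining = abs(offset)\n    i = dep_index + step\n    while 0 <= i < len(pos_tags):\n        if pos_tags[i] == pos:\n            remaining -= 1\n            if remaining == 0:\n                return i\n        i += step\n    return None  # the tag is not realisable in this sentence\n\n# Toy example (STTS tags): attach the last token to a finite verb on its left.\npos = ['PPER', 'VVFIN', 'PPER', 'VVFIN', 'ART', 'NN']\nprint(decode_head(pos, 5, ('VVFIN', -1)))  # -> 3\nprint(decode_head(pos, 5, ('VVFIN', -2)))  # -> 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3.1"
},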
{
"text": "As a conventional baseline we employ the five nonneural parsers from Adelmann et al. (2018a), excluding JWCDG, but only report the perform ance of the best parser per test text as reference. In all cases this was either Malt 3 (Nivre, 2003) with the 'Covington nonprojective' algorithm (Cov ington, 2001) or Mate 4 (Bohnet, 2010) .",
"cite_spans": [
{
"start": 227,
"end": 240,
"text": "(Nivre, 2003)",
"ref_id": "BIBREF19"
},
{
"start": 250,
"end": 304,
"text": "'Covington nonprojective' algorithm (Cov ington, 2001)",
"ref_id": null
},
{
"start": 315,
"end": 329,
"text": "(Bohnet, 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "3.1"
},
{
"text": "As Sticker cannot be run without word embeddings as input, we cannot entirely turn off this input, but we can substitute artificially created pseudo (or 'dummy') embeddings that are 'uninformative' in the sense that they do not encode any properties of the words beyond the word form identity (in par ticular, no semantics at all). We experiment with such uninformative embeddings created in differ ent ways, two of them deterministic and four ran dom (sampled with respect to different distribu tions, thus having different properties):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "empty: an embedding not containing any words at all. This will make any word form encountered by the parser outofvocabulary (just like rare word forms simply not covered by a 'normal' embed ding).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "zero: an embedding mapping every word to the zero vector (the vector containing only zer oes). The outofvocabulary words are therefore the same as for the informative control embedding (see further below), but as all of them are assigned the same vector, they are entirely indistinguishable when processing them only by means of their word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "cube: an embedding mapping every word to a vector with stochastically independent compon ents all uniformly distributed in the unit inter val [0, 1). In contrast to the previous embedding, words now have different vectors and are therefore distinguishable, but as the vectors are chosen at random, they are highly unlikely to correlate with any linguistic relation: They do not carry any se mantic information whatsoever.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "ccube: like cube, but shifted into the origin, i. e. with components drawn from [\u22120.5, 0.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "gauss: an embedding mapping every word to a standard normal random vector, i. e. a vector with stochastically independent components all follow ing a standard normal ('Gaussian') distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "sphere: an embedding mapping every word to a vector of length one (i. e. on the Euclidean unit sphere, hence the name), with every such vector having equal probability. In this embedding, any word vector can be separated from every other word vector by some hyperplane, so distinguish ing words should be especially easy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
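{
"text": "A minimal NumPy sketch of the six constructions described above (our illustration, not the scripts actually used); dimension and vocabulary are taken over from the control embeddings (300 for words, 50 for POS tags), and 'empty' simply corresponds to an empty vector table.\n\nimport numpy as np\n\ndef uninformative_embeddings(vocab, dim=300, kind='zero', seed=42):\n    '''Map every item in vocab to an uninformative vector of length dim.'''\n    rng = np.random.default_rng(seed)\n    n = len(vocab)\n    if kind == 'empty':\n        return {}  # no words covered at all\n    if kind == 'zero':\n        vecs = np.zeros((n, dim))  # all words share one vector\n    elif kind == 'cube':\n        vecs = rng.uniform(0.0, 1.0, (n, dim))  # uniform on [0, 1)\n    elif kind == 'ccube':\n        vecs = rng.uniform(-0.5, 0.5, (n, dim))  # cube shifted to the origin\n    elif kind == 'gauss':\n        vecs = rng.standard_normal((n, dim))  # i.i.d. standard normal\n    elif kind == 'sphere':\n        vecs = rng.standard_normal((n, dim))  # normalised Gaussian vectors are\n        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # uniform on the unit sphere\n    else:\n        raise ValueError(kind)\n    return dict(zip(vocab, vecs))\n\nword_emb = uninformative_embeddings(['der', 'Hund', 'bellt'], dim=300, kind='sphere')\npos_emb = uninformative_embeddings(['ART', 'NN', 'VVFIN'], dim=50, kind='gauss')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},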
{
"text": "As an informative control embedding we use the German word embedding released with the pretrained Sticker models. 5 Except for 'empty', which does not contain any vectors at all, all arti ficially created embeddings share dimension (300) and vocabulary with the control embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "For testing the influence of domainspecific em beddings, we train additional embeddings on texts sampled from the test corpora (see Section 3.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "As mentioned above, the parser also requires an embedding of the partofspeech (POS) tags present in the input. The control embedding here is based on one released with the pretrained Sticker models which embeds the STTS (Schiller et al., 1999) . 6 Additionally we created uninform ative embeddings of the same six types as above, again with vocabulary (tag inventory\u037e except for 'empty') and dimension (50) the same as in the control embedding.",
"cite_spans": [
{
"start": 220,
"end": 243,
"text": "(Schiller et al., 1999)",
"ref_id": "BIBREF21"
},
{
"start": 246,
"end": 247,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "However, we do not provide the parser with both uninformative word and uninformative POS em bedding, as the only input that the parser receives are embedded words and POS tags, so making both embeddings uninformative would actually decouple the parser from its input. 7 We have not 5 German word embeddings, trained on T\u00fcBaD/DP (de Kok and P\u00fctz, 2019), quantized using optimized product quantization: https://github.com/stickeritis/stickermodels/ releases/tag/destructgram20190426opq (September 16, 2019, last retrieved April 14, 2021) 6 With PAV instead of PROAV\u037e source of the original embedding: https://blob.danieldk.eu/stickermodels/ destructgramtags20190426.fifu (last retrieved May 14, 2021) 7 As a sanity check we did try that, obtaining UAS values between 17 % and 22 % and LAS values between 10 % and 16 %. Note that even in this scenario the parser still has ac tried combining the uninformative POS embed dings with the domainspecific word embeddings either. This leaves us with four types of neural parser configuration: With control word and control POS embedding (baseline), with uninformative word and control POS embedding, with control word and uninformative POS embedding, and with domainspecific word and control POS embed ding.",
"cite_spans": [
{
"start": 268,
"end": 269,
"text": "7",
"ref_id": null
},
{
"start": 282,
"end": 283,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "For every artificial embedding we train one model for the parser and the respective embedding on the first 91,999 sentences of part A of the Ham burg Dependency Treebank (Foth et al., 2014) , with the remaining 10,000 sentences (9.8 %) as validation set.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Foth et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Embeddings",
"sec_num": "3.2"
},
{
"text": "To obtain test data, three annotators manually annotated randomly drawn sentences from three different corpora. The first one is a corpus of 636 modern dystopias written by German writers. The second one is the dProse corpus (Gius et al., 2020) containing 2,529 literary German prose texts from between 1870 and 1920. The third one con sists of 8,788 documents downloaded from the internet, selected by the appearance of German keywords related to telemedicine (Franken and Adelmann, 2021) . The sentences sampled from each corpus were combined with the annotated sentences of the respective texts from Adelmann et al. (2018b). The three test sets comprise around 7,500 tokens and 450 sentences each, with sim ilar sentence length distributions (for details see Table 5\u037e this, as well as some other tables, can be found in the appendix).",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "(Gius et al., 2020)",
"ref_id": null
},
{
"start": 461,
"end": 489,
"text": "(Franken and Adelmann, 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 762,
"end": 770,
"text": "Table 5\u037e",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},
{
"text": "These datasets can be expected to notably differ both stylistically and thematically from the train ing data and between each other, without being in trinsically hard to annotate (and parse) like spoken or Twitter data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},
{
"text": "The three annotators annotated the texts with dependency relations following the guidelines of Foth (2006) , obtaining an overall interannotator reliability of Fleiss' = 0.89 for unlabelled at tachment accuracy and Fleiss' = 0.93 for la belled attachment accuracy on a balanced subset of about 20 % of the test data. The remaining data was distributed among the annotators (so that sen tences were annotated by only one annotator each) and subsequently postedited based on some heur istics for checking consistency.",
"cite_spans": [
{
"start": 95,
"end": 106,
"text": "Foth (2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},
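{
"text": "For reference, Fleiss' \u03ba over the multiply annotated subset can be computed as in the following generic sketch (the standard definition, not the authors' evaluation script); each row of the count matrix records how many of the three annotators chose each attachment category for one token.\n\nimport numpy as np\n\ndef fleiss_kappa(counts):\n    '''counts[i, j]: number of annotators assigning category j to item i;\n    every row must sum to the same number of annotators.'''\n    counts = np.asarray(counts, dtype=float)\n    n_items = counts.shape[0]\n    n_raters = counts.sum(axis=1)[0]\n    # observed agreement: proportion of agreeing annotator pairs per item\n    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))\n    p_bar = p_i.mean()\n    # chance agreement from the marginal category distribution\n    p_j = counts.sum(axis=0) / (n_items * n_raters)\n    p_e = np.square(p_j).sum()\n    return (p_bar - p_e) / (1.0 - p_e)\n\n# Toy example: 3 annotators, 4 tokens, 3 candidate attachment categories.\nprint(fleiss_kappa([[3, 0, 0], [2, 1, 0], [0, 3, 0], [0, 0, 3]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},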
{
"text": "The annotators only annotated dependencies. POS tags (required by all parsers as input), lem mata and morphological features (required by the nonneural parsers) were predicted by a tagger en semble. 8 This is in contrast to training time, where gold POS tags from the treebank were used. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Data",
"sec_num": "3.3"
},
{
"text": "To obtain domainspecific embeddings we trained word embeddings on samples of similar total token count as part A of the Hamburg Dependency Tree bank (approx. 1,872,622 tokens) from each of our test corpora, a reasonable order of magnitude for domainspecific data. The samples were chosen at random from the test corpora, taking care that no sentences used as test data were also selected as training data for the embeddings. Additionally, we sampled a collection of sentences, again of roughly the same total token count, from the union of all three test corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DomainSpecific Embeddings",
"sec_num": "3.4"
},
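{
"text": "The paper does not name the toolkit or hyperparameters used for these domain-specific embeddings; as one plausible way to reproduce the setup, the sketch below samples sentences up to the stated token budget and trains 300-dimensional skip-gram vectors with gensim (toolkit choice, min_count and other settings are our assumptions).\n\n# Hypothetical reproduction of the domain-specific embedding training;\n# toolkit and hyperparameters are assumptions, not taken from the paper.\nimport random\nfrom gensim.models import Word2Vec\n\ndef train_domain_embeddings(sentences, target_tokens=1_872_622, dim=300, seed=1):\n    '''sentences: token lists from one test corpus, test sentences already excluded.'''\n    random.seed(seed)\n    random.shuffle(sentences)\n    sample, total = [], 0\n    for sent in sentences:  # sample roughly as many tokens as HDT part A\n        if total >= target_tokens:\n            break\n        sample.append(sent)\n        total += len(sent)\n    model = Word2Vec(sample, vector_size=dim, sg=1, min_count=2, workers=4)\n    return model.wv  # keyed vectors usable as parser input\n\n# vectors = train_domain_embeddings(corpus_sentences)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DomainSpecific Embeddings",
"sec_num": "3.4"
},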
{
"text": "We assess performance differences by compar ing unlabelled and labelled attachment accuracy (also known as unlabelled and labelled attachment score, or UAS and LAS) with respect to our test data between the best conventional (nonneural) parser, the neural parser with the ('informative') control embeddings, and the neural parser with our manipulated (i. e. uninformative or domain specific) embeddings. For the webcrawling data, the bestperforming conventional parser was Malt\u037e for the other test sets, it was Mate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Usually, such attachment accuracies are com puted excluding punctuation since punctuation at tachment and labelling is considered trivial. This, however, may not be the case if uninformative em beddings make it hard for the parser to determ ine which tokens are in fact punctuation. For this reason, we treat punctuation like any other tokens and report attachment accuracies including punc tuation. Between 12 % and 17 % of the tokens in our test data are punctuation (according to auto matic POS tagging), so they also increase the ef fective amount of test data, and when excluding them, attachment accuracies are about 2 percent age points lower than those we report, for both the neural and the conventional baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
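{
"text": "Concretely, the scores reported in this section can be computed as in the following generic sketch of UAS and LAS over all tokens, i. e. without the customary punctuation filter (our illustration, not the authors' evaluation code).\n\ndef attachment_scores(gold, pred):\n    '''gold, pred: per-token (head_index, deprel) pairs, punctuation included.\n    Returns (UAS, LAS) as fractions of tokens.'''\n    assert len(gold) == len(pred)\n    correct_head = correct_both = 0\n    for (g_head, g_rel), (p_head, p_rel) in zip(gold, pred):\n        if g_head == p_head:\n            correct_head += 1\n            if g_rel == p_rel:\n                correct_both += 1\n    n = len(gold)\n    return correct_head / n, correct_both / n\n\n# Toy example: 4 tokens; the last one is punctuation attached to the finite verb.\n# The sentence root is encoded as head index -1 (a convention of this sketch).\ngold = [(1, 'DET'), (2, 'SUBJ'), (-1, 'S'), (2, 'ROOT')]\npred = [(1, 'DET'), (2, 'OBJA'), (-1, 'S'), (2, 'ROOT')]\nprint(attachment_scores(gold, pred))  # -> (1.0, 0.75)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},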
{
"text": "With the control embedding, the neural parser has a UAS 3 to 4 percentage points higher than the best conventional parser and an LAS 5 to 6 percentage points higher\u037e this is a considerable baseline differ ence. With uninformative word embeddings, this margin decreases by 1 to 3 percentage points in the case of UAS and by 1 to 7 percentage points for LAS, depending on test set and the type of un informative embedding. For instance, on the mod ern dystopias data with the 'cube' embedding, the UAS decreases from 0.93 to 0.90, and the LAS de creases from 0.91 to 0.84, the UAS reducing to and the LAS even falling short of Mate's performance (cf. Table 1 ). The other uninformative embeddings have less dramatic effects, giving values generally still above the conventional baseline. For all test sets, the embedding with the highest UAS and LAS is 'sphere', and the 'cube' embedding is among those with the smallest UAS and LAS. The other embeddings do not differ much from each other, their accuracies being mostly closer to those of the conventional than those of the neural baseline. Performance differences between test sets are sim ilar for the baseline models (both conventional and neural) and the models with uninformative embed dings.",
"cite_spans": [],
"ref_spans": [
{
"start": 649,
"end": 656,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Uninformative Word Embeddings",
"sec_num": "4.1"
},
{
"text": "As the UAS and LAS differences are small, we also tested for statistical significance, us ing the randomization test of Yeh (2000) (with 100,000 samples) because theoretical distributions are not known. Except for the 'sphere' embed ding tested on webcrawling data or the combina tion of all three, the pvalue for the hypothesis that the model performs as well as the neural baseline is below 5 %\u037e in the vast majority of cases, it is even below the stricter significance threshold of 0.25 % proposed by S\u00f8gaard et al. (2014) , so we can be confident that the models do indeed per form worse than the neural baseline. On the other hand, the pvalue for the hypothesis that the model performs as well as the conventional baseline is mostly not below the strict threshold, but below 5 % in more than half of the cases (see Table 4 ). The hypothesis cannot be rejected for the UAS of the 'cube' embedding (i. e. this embedding makes the neural parser perform no better than the best conventional parser, at least not with respect to Table 3 : Attachment accuracies for the domainspecific word embeddings, including punctuation head attachments), but it can be rejected (even with the stricter threshold) for the 'sphere' embedding (i. e. this embedding makes the neural parser still perform better than the best conventional parser). For the other embeddings, the picture is mixed. Even where the pvalue is below 5 %, it is not much lower, so one should be cautious about rejecting the null hypothesis.",
"cite_spans": [
{
"start": 504,
"end": 525,
"text": "S\u00f8gaard et al. (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 820,
"end": 827,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1029,
"end": 1036,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uninformative Word Embeddings",
"sec_num": "4.1"
},
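{
"text": "The significance test can be sketched as a paired approximate randomization test in the spirit of Yeh (2000): the two systems' per-sentence scores are swapped at random to estimate how often a difference at least as large as the observed one arises by chance (a generic implementation with 100,000 samples; not the authors' script).\n\nimport random\n\ndef randomization_test(scores_a, scores_b, samples=100_000, seed=0):\n    '''scores_a, scores_b: per-sentence scores (e.g. numbers of correct heads)\n    of two systems on the same sentences; returns the p-value for the null\n    hypothesis that both systems perform equally well.'''\n    rng = random.Random(seed)\n    observed = abs(sum(scores_a) - sum(scores_b))\n    at_least_as_extreme = 0\n    for _ in range(samples):\n        diff = 0.0\n        for a, b in zip(scores_a, scores_b):\n            if rng.random() < 0.5:  # randomly swap the paired scores\n                a, b = b, a\n            diff += a - b\n        if abs(diff) >= observed:\n            at_least_as_extreme += 1\n    return (at_least_as_extreme + 1) / (samples + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uninformative Word Embeddings",
"sec_num": "4.1"
},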
{
"text": "For the uninformative POS embeddings, UAS and LAS values are higher than for the uninformat ive word embeddings. The 'ccube', 'gauss', and 'sphere' embedding even result in the same UAS as the control embedding (and so does the 'cube' embedding on the 19th century and webcrawling data). This is not very surprising since there are substantially fewer POS tags than words, and con sequently, uninformative POS embeddings mean less information loss than uninformative word em beddings. Still, performance decreases with re spect to the baseline can be observed over all test sets for the 'empty' and 'zero' embeddings, and for the other uninformative embeddings, there seems to be a tendency towards reductions in LAS (see Table 2 ). The increase in UAS from uninformative word to uninformative POS embeddings is smal ler (1.6 percentage points on average) than the in crease in LAS (2.6 percentage points on average), suggesting that there are in comparison more label errors when word embeddings are uninformative than when only POS embeddings are. Addition ally, all values across the board are better now than those of the conventional parsers. Correspondingly, pvalues (Table 9 ) do clearly not permit rejection of the hypothesis that the unin formative 'ccube', 'gauss', or 'sphere' embedding makes the neural parser perform worse than with the control embedding, and the hypothesis that the performance is only as good as that of the conven tional baseline can be rejected to the strict signi ficance level of 0.25 %. The latter is even true for the 'cube' embedding, while the pvalue for the test against the neural baseline LAS is also below 0.25 % for the dystopias and still below 5 % for the 19th century novels. The 'empty' and 'zero' em beddings exhibit mixed values. The pvalues are below 5 % when testing against either baseline (but mostly not below 0.25 % for the neural baseline), with values below the stricter threshold appear ing mostly for the LAS against the conventional parsers. Hence, here the assertion that the neural parser yields a better LAS than the conventional ones even with uninformative POS embeddings is more likely true than the corresponding one about the UAS. Apparently uninformative word embed dings have a stronger negative impact on LAS than uninformative POS embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 722,
"end": 729,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1173,
"end": 1181,
"text": "(Table 9",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Uninformative POS Embeddings",
"sec_num": "4.2"
},
{
"text": "A difference between the neural parser's perform ance with domainspecific word embeddings and 0.0430 0.0010 0.0170 0.0010 0.0010 0.0010 0.1360 0.0010 0.0010 0.0010 3.0180 0.0190 19th century 0.0960 0.0010 0.1520 0.0010 0.0110 0.0010 0.7470 0.0010 0.0130 0.0010 3.1170 0.0910 webcrawling 0.0200 0.0080 0.0040 0.0010 0.0040 0.0010 0.4580 0.3800 0.0160 0.0840 with the control embedding is almost nonexist ent, and the pvalues are never below the signi ficance threshold either. Conversely, they are al ways below the strict threshold for the hypothesis that the performance is not better than that of the conventional parser. While it is notable that even 'little' data the size of a dependency treebank (em beddings are usually trained on much bigger cor pora) are sufficient to create an embedding suffi ciently informative for the parser, 10 this does so far not facilitate insight into the role the embed ding may play in domain adaptation. We did not test for effects of the embeddings being used cross domain (e. g. the embedding trained on 19th cen tury novels being used for parsing webcrawling data) as the performance differences among the different embeddings for the same test set are small where present at all, so we expect differences between test sets to be largely due to other parser challenging aspects (such as general sentence com plexity).",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 358,
"text": "0.0430 0.0010 0.0170 0.0010 0.0010 0.0010 0.1360 0.0010 0.0010 0.0010 3.0180 0.0190 19th century 0.0960 0.0010 0.1520 0.0010 0.0110 0.0010 0.7470 0.0010 0.0130 0.0010 3.1170 0.0910 webcrawling 0.0200 0.0080 0.0040 0.0010 0.0040 0.0010 0.4580 0.3800 0.0160 0.0840",
"ref_id": null
}
],
"eq_spans": [],
"section": "DomainSpecific Word Embeddings",
"sec_num": "4.3"
},
{
"text": "Finally, we take a brief look at some individual de pendency labels. As pointed out in Adelmann et al. (2018a), overall attachment accuracies are skewed towards the performance on frequent phenomena such as determiner attachment, obfuscating issues with dependency relations that are of interest to content analyses, but appear less often. This eval uation only refers to the combination of all three test sets in the hope that as many labels as possible 10 We have not tested how well the domainspecific embed dings capture relationships between the embedded words.",
"cite_spans": [
{
"start": 455,
"end": 457,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LabelSpecific Evaluation",
"sec_num": "4.4"
},
{
"text": "For a number of labels, attachment precision and recall changed by more than 10 percentage points when parsed with uninformative embed dings, compared to parsing with the control em bedding. Out of those, eleven appear more than 100 times in our test data\u037e Table 11 shows their at tachment precision and recall. Similarly great or in some cases even greater differences can also be observed for eleven other labels, but those are less frequent, some of them indeed very infrequent (e. g. there are only four occurrences of OBJG), so their values are probably unreliable. Among the frequent labels, heavy losses (up to 56 percentage points) can be observed for OBJD (dative object) and OBJP (prepositional object), mainly for the 'empty', 'zero' and 'cube' embeddings. OBJA (ac cusative object), PRED (predicative) and GMOD (genitive modifier) show losses mainly for these three embeddings, too, albeit not as big. With the 'ccube', 'gauss' and 'sphere' embeddings, losses are generally smaller, and for KOM (comparison), recall even rises with the 'ccube', 'gauss' and 'sphere' embedding.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 265,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "LabelSpecific Evaluation",
"sec_num": "4.4"
},
{
"text": "There are also three labels where almost no dif ference in precision and recall can be observed for the deterministically uninformative embeddings ('empty' and 'zero'), but for the other (the random) embeddings: APP (apposition), ROOT and S. The latter two are especially interesting as S denotes the root node of sentences (in HDT, this is usually the finite verb) and ROOT is the label used exclusively for punctuation. While the precision of ROOT is always 1.00 (when the parser assigns this label, it is always correct), recall drops from almost 1.00 by 13 to 14 percentage points for the 'cube', 'ccube' and 'gauss' embeddings, that is, with those embed dings the parser fails to correctly identify about 13 to 14 % of the punctuation tokens. This is strange and remarkable given that punctuation is trivially identified by its POS. The decrease does not occur for the deterministic embeddings, nor for 'sphere'. S exhibits a similar phenomenon, but there it is precision that drops while recall remains, meaning that the parser misidentifies something as a sen tence root. Table 12 shows precision and recall when pars ing with uninformative POS embeddings, for the same labels as above. As with UAS and LAS, dif ferences are less pronounced here, except for three labels when parsing with the deterministic embed dings: KOM shows a considerable increase in re call and OBJI (object infinitive) in precision, while ROOT decreases, again by 13 percentage points. This is complementary to the situation with un informative word embeddings, where ROOT does not decrease for these two embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 1079,
"end": 1087,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "LabelSpecific Evaluation",
"sec_num": "4.4"
},
{
"text": "For the sake of completeness we note that there were no particularly interesting label performance differences when parsing with the domainspecific embeddings (Table 13) .",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 169,
"text": "(Table 13)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "LabelSpecific Evaluation",
"sec_num": "4.4"
},
{
"text": "The main motivation for this paper was the ques tion of whether neural networks are better than conventional, nonneural architectures at learn ing the syntactic knowledge needed for parsing, as opposed to just having the advantage of be ing provided with extra information in the form of word embeddings, and we approached this us ing the proxy question of how the output of a neural parser changes when depriving it of this ex tra information. The answer to this question from our results can be framed in two ways, depend ing on the perspective: Even without access to the knowledge encoded in a word embedding, the neural parser still performs (at least) as well as the best nonneural parser, so this lack of know ledge does not impair it so much that a conven tional tool would be clearly preferable. Or altern atively: Without access to the knowledge encoded in a word embedding, the neural parser performs only about as well as the best nonneural parser, implying that it may indeed very well be the know ledge in the embedding that enables superior per formance, not a superiority of the architecture. 11 The results further suggest that a lack of word embedding knowledge abets label errors, while a lack of POS embeddings abets attachment errors, with a general tendency towards an increase in label errors in both cases. This could mean that knowledge about the cooccurrence of POS tags is more useful for predicting the correct head and knowledge about the cooccurrence of words is more useful for choosing the correct dependency label, which would not be implausible from a lin guistic point of view. More dedicated experiments are necessary, however, to corroborate this hypo thesis.",
"cite_spans": [
{
"start": 1109,
"end": 1111,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We also found that not all dependency labels are affected equally, the losses being concentrated mainly at 'contentrelated' labels such as OBJA (accusative object), with the especially vexing observation that uninformative word embeddings hinder the correct labelling of punctuation even though POS information should be sufficient to do so. A qualitative analysis of the label errors could be illuminative\u037e possible reasons for this oddity would have to be investigated in greater depth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The experiment with domainspecific embed dings was inconclusive, at least with the lim ited amount of domainspecific data used\u037e the differences in vocabulary and in word semantics between the corpora were possibly too small to have a noticeable impact on parsing. We do ob serve, though, that even embeddings trained on little data make the parser perform almost as well as the control embeddings trained on big data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Given this finding, subsequent research would have to dig further into the relationship between the size of the data used for training word embed dings and parser performance when using them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We conducted our experiments with only one single parser. To assess how well our results ap ply to neural dependency parsing in general, future work would have to examine other parsers as well, particularly ones built on other parsing paradigms such as transitionbased or graphbased parsing. It could furthermore be insightful to draw a compar ison with conventional parsers able to use word embeddings (e. g. RBGParser). .1890 0.0330 0.0050 0.0080 0.0010 0.0090 0.0010 0.0080 0.0010 (b) pvalues for the hypothesis that the results are not better than the performance of the respective best conventional parser (see Table 2 ) 0.0050 0.0010 0.0050 0.0010 0.0040 0.0010 0.0020 0.0010 19th century 0.0090 0.0010 0.0020 0.0010 0.0020 0.0010 0.0010 0.0010 webcrawling 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 all three 0.0350 0.0010 0.0150 0.0010 0.0160 0.0010 0.0210 0.0010 (b) pvalues for the hypothesis that the results are not better than the performance of the respective best conventional parser (see Table 3 ) Table 10 : pvalues (in %) for Yeh's randomized permutation test on performance differences between the domain specific embeddings and the two baselines. Values below the significance threshold of 5 % are marked in italics\u037e values below the stricter threshold of 0.25 % are additionally marked in bold. Values for the combination of all three corpora were computed on a subset of 461 sentences so that pvalues are comparable. Table 13 : Precision and recall for selected labels when parsing with the domainspecific word embeddings. The 'gold count' column gives the number of occurrences of the label in our test data. Values differing by more than 10 percentage points from the baseline are marked in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 422,
"end": 483,
"text": ".1890 0.0330 0.0050 0.0080 0.0010 0.0090 0.0010 0.0080 0.0010",
"ref_id": null
},
{
"start": 616,
"end": 623,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 626,
"end": 888,
"text": "0.0050 0.0010 0.0050 0.0010 0.0040 0.0010 0.0020 0.0010 19th century 0.0090 0.0010 0.0020 0.0010 0.0020 0.0010 0.0010 0.0010 webcrawling 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 all three 0.0350 0.0010 0.0150 0.0010 0.0160 0.0010 0.0210 0.0010",
"ref_id": null
},
{
"start": 1021,
"end": 1028,
"text": "Table 3",
"ref_id": null
},
{
"start": 1031,
"end": 1039,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 1456,
"end": 1464,
"text": "Table 13",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Or 'selfsupervised', referring to the fact that manual an notation effort is unnecessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Details given here that are not from the cited paper are from personal communication with Dani\u00ebl de Kok. 3 http://www.maltparser.org/ 4 https://code.google.com/archive/p/matetools/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "cess to sentence lengths, and POS tags are available when determining the attachment point based on the complex tag being predicted by the neural network, which itself does not have this information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See https://github.com/benadelm/hermAPipeline (last re trieved August 7, 2021). 9 Again, with PAV instead of PROAV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Of course, the mere ability to utilize word embeddings can be seen as an architectural superiority. This is not restricted to neural networks, though: RBGParser(Lei et al., 2014), too, can use word embeddings (cf.K\u00f6hn, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been funded by 'Landesforschungs f\u00f6rderung Hamburg' in the context of the hermA project (LFFFV 35). We would like to thank the reviewers for their thorough comments, Lea R\u00f6seler and Emily Roose for their invaluable an notation effort and Piklu Gupta for improving our English. All remaining errors are ours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "overall sentences tokens count token count text avg m stddev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evalu ation of OutofDomain Dependency Parsing for its Application in a Digital Humanities Project",
"authors": [
{
"first": "Melanie",
"middle": [],
"last": "Benedikt Adelmann",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Andresen",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Menzel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zinsmeister",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 14th Conference on Nat ural Language Processing",
"volume": "",
"issue": "",
"pages": "121--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benedikt Adelmann, Melanie Andresen, Wolfgang Menzel, and Heike Zinsmeister. 2018a. Evalu ation of OutofDomain Dependency Parsing for its Application in a Digital Humanities Project. In Proceedings of the 14th Conference on Nat ural Language Processing (KONVENS 2018), pages 121-135.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Manual Dependency Annotation of Three German Text Ex tracts from the Project hermA (Gold Standard Data)",
"authors": [
{
"first": "Melanie",
"middle": [],
"last": "Benedikt Adelmann",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Andresen",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Menzel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zinsmeister",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1324079"
]
},
"num": null,
"urls": [],
"raw_text": "Benedikt Adelmann, Melanie Andresen, Wolfgang Menzel, and Heike Zinsmeister. 2018b. Manual Dependency Annotation of Three German Text Ex tracts from the Project hermA (Gold Standard Data). doi:10.5281/zenodo.1324079.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Finegrained Ana lysis of Sentence Embeddings Using Auxiliary Pre diction Tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR Conference Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Finegrained Ana lysis of Sentence Embeddings Using Auxiliary Pre diction Tasks. In Proceedings of ICLR Conference Track, Toulon, France.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages",
"authors": [
{
"first": "Satwik",
"middle": [],
"last": "Bhattamishra",
"suffix": ""
},
{
"first": "Kabir",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Navin",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1481--1494",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.129"
]
},
"num": null,
"urls": [],
"raw_text": "Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. 2020. On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1481-1494.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Very High Accuracy and Fast De pendency Parsing is not a Contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceed ings of the 23rd International Conference on Com putational Linguistics",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Very High Accuracy and Fast De pendency Parsing is not a Contradiction. In Proceed ings of the 23rd International Conference on Com putational Linguistics, pages 89-97.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What you can cram into a single $&!#* vector: Prob ing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the As sociation for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1198"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Prob ing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Fundamental Algorithm for Dependency Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual ACM Southeast Conference",
"volume": "",
"issue": "",
"pages": "95--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Covington. 2001. A Fundamental Algorithm for Dependency Parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95-102.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Eine umfassende Constraint DependenzGrammatik des Deutschen",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Foth",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Foth. 2006. Eine umfassende Constraint DependenzGrammatik des Deutschen. Technical report, Universit\u00e4t Hamburg.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Because Size Does Matter: The Hamburg Dependency Treebank",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Foth",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
},
{
"first": "Niels",
"middle": [],
"last": "Beuck",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Menzel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Language Resources and Evaluation Confer ence 2014. European Language Resources Associ ation (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Foth, Arne K\u00f6hn, Niels Beuck, and Wolfgang Menzel. 2014. Because Size Does Matter: The Hamburg Dependency Treebank. In Proceedings of the Language Resources and Evaluation Confer ence 2014. European Language Resources Associ ation (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Web crawling zu Akzeptanzproblematiken der Telemedi zin",
"authors": [
{
"first": "Lina",
"middle": [],
"last": "Franken",
"suffix": ""
},
{
"first": "Benedikt",
"middle": [],
"last": "Adelmann",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4557100"
]
},
"num": null,
"urls": [],
"raw_text": "Lina Franken and Benedikt Adelmann. 2021. Web crawling zu Akzeptanzproblematiken der Telemedi zin. doi:10.5281/zenodo.4557100.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Simple and Accurate Dependency Parsing Using Bidirec tional LSTM Feature Representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00101"
]
},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirec tional LSTM Feature Representations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "T\u00fcBaD/DP Stylebook",
"authors": [
{
"first": "Kok",
"middle": [],
"last": "Dani\u00ebl De",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "P\u00fctz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani\u00ebl de Kok and Sebastian P\u00fctz. 2019. T\u00fcBaD/DP Stylebook. Technical report, Seminar f\u00fcr Sprach wissenschaft, University of T\u00fcbingen.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Selfdistillation for German and Dutch dependency parsing",
"authors": [
{
"first": "Kok",
"middle": [],
"last": "Dani\u00ebl De",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "P\u00fctz",
"suffix": ""
}
],
"year": 2020,
"venue": "Com putational Linguistics in the Netherlands Journal",
"volume": "10",
"issue": "",
"pages": "91--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dani\u00ebl de Kok and Tobias P\u00fctz. 2020. Selfdistillation for German and Dutch dependency parsing. In Com putational Linguistics in the Netherlands Journal, volume 10, pages 91-107.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Classifier Probes May Just Learn from Linear Context Fea tures",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Kunz",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5136--5146",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.450"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Kunz and Marco Kuhlmann. 2020. Classifier Probes May Just Learn from Linear Context Fea tures. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5136-5146.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Evaluating Embeddings using Syntaxbased Classification Tasks as a Proxy for Parser Performance",
"authors": [
{
"first": "Arne",
"middle": [],
"last": "K\u00f6hn",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Work shop on Evaluating Vector Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "67--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arne K\u00f6hn. 2016. Evaluating Embeddings using Syntaxbased Classification Tasks as a Proxy for Parser Performance. In Proceedings of the 1st Work shop on Evaluating Vector Space Representations for NLP, pages 67-71.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lowrank Tensors for Scor ing Dependency Structures",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xin",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Compu tational Linguistics",
"volume": "1",
"issue": "",
"pages": "1381--1391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Lowrank Tensors for Scor ing Dependency Structures. In Proceedings of the 52nd Annual Meeting of the Association for Compu tational Linguistics, volume 1, pages 1381-1391.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lin guistic Profiling of a Neural Language Model",
"authors": [
{
"first": "Alessio",
"middle": [],
"last": "Miaschi",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Brunato",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "745--756",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.65"
]
},
"num": null,
"urls": [],
"raw_text": "Alessio Miaschi, Dominique Brunato, Felice Dell'Orletta, and Giulia Venturi. 2020. Lin guistic Profiling of a Neural Language Model. In Proceedings of the 28th International Conference on Computational Linguistics, pages 745-756.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Closer Look at Linguistic Know ledge in Masked Language Models: The Case of Re lative Clauses in American English",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Mosbach",
"suffix": ""
},
{
"first": "Stefania",
"middle": [],
"last": "Degaetanoortlieb",
"suffix": ""
},
{
"first": "Marie",
"middle": [
"Pauline"
],
"last": "Krielke",
"suffix": ""
},
{
"first": "Badr",
"middle": [],
"last": "Abdullah",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computa tional Linguistics",
"volume": "",
"issue": "",
"pages": "771--787",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.67"
]
},
"num": null,
"urls": [],
"raw_text": "Marius Mosbach, Stefania DegaetanoOrtlieb, Marie Pauline Krielke, Badr Abdullah, and Dietrich Klakow. 2020. A Closer Look at Linguistic Know ledge in Masked Language Models: The Case of Re lative Clauses in American English. In Proceedings of the 28th International Conference on Computa tional Linguistics, pages 771-787.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An efficient algorithm for pro jective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT)",
"volume": "",
"issue": "",
"pages": "149--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2003. An efficient algorithm for pro jective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149-160.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An Em pirical Study of the Downstream Reliability of Pre Trained Word Embeddings",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Lwowski",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3371--3388",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.299"
]
},
"num": null,
"urls": [],
"raw_text": "Anthony Rios and Brandon Lwowski. 2020. An Em pirical Study of the Downstream Reliability of Pre Trained Word Embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3371-3388.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Guidelines f\u00fcr das Tagging deutscher Textcorpora mit STTS",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "St\u00f6ckert",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Thielen",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Schiller, Simone Teufel, Christine St\u00f6ckert, and Christine Thielen. 1999. Guidelines f\u00fcr das Tagging deutscher Textcorpora mit STTS. Technical report, Institut f\u00fcr maschinelle Sprachverarbeitung, Semi nar f\u00fcr Sprachwissenschaft, Stuttgart, T\u00fcbingen.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Does StringBased Neural MT Learn Source Syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does StringBased Neural MT Learn Source Syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "What's in a pvalue in NLP?",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Language Learning",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martinez. 2014. What's in a pvalue in NLP? In Proceedings of the Eighteenth Conference on Computational Language Learning, pages 1-10.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Fast and Ro bust Multilingual Dependency Parsing with a Gener ative Latent Variable Model",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Nat ural Language Processing and Computational Nat ural Language Learning (EMNLPCoNLL)",
"volume": "",
"issue": "",
"pages": "947--951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and James Henderson. 2007. Fast and Ro bust Multilingual Dependency Parsing with a Gener ative Latent Variable Model. In Proceedings of the 2007 Joint Conference on Empirical Methods in Nat ural Language Processing and Computational Nat ural Language Learning (EMNLPCoNLL), pages 947-951.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "More accurate tests for the stat istical significance of result differences",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceed ings of the 18th International Conference on Compu tational Linguistics",
"volume": "",
"issue": "",
"pages": "947--953",
"other_ids": {
"DOI": [
"10.3115/992730.992783"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Yeh. 2000. More accurate tests for the stat istical significance of result differences. In Proceed ings of the 18th International Conference on Compu tational Linguistics, pages 947-953.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "LINSPECTOR: Multilingual Probing Tasks for Word Representations",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "G\u00f6zde G\u00fcl \u015eahin",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Computa tional Linguistics",
"volume": "46",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/coli_a_00376"
]
},
"num": null,
"urls": [],
"raw_text": "G\u00f6zde G\u00fcl \u015eahin, Clara Vania, Ilia Kuznetsov, and Iryna Gurevych. 2020. LINSPECTOR: Multilingual Probing Tasks for Word Representations. Computa tional Linguistics, 46(2):335-385. 22,218 1,383 16.065 13 12.684",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "88 0.82 0.92 0.89 0.89 0.84 0.89 0.84 0.89 0.84 0.91 0.87 0.90 0.86 0.91 0.87 19th century Mate 0.85 0.80 0.89 0.86 0.87 0.81 0.87 0.81 0.87 0.82 0.88 0.84 0.88 0.83 0.88 0.83 webcrawling Malt 0.85 0.80 0.90 0.87 0.87 0.83 0.86 0.82 0.87 0.83 0.89 0.85 0.88 0.85 0.89 0.85 all three Mate 0.86 0.80 0.90 0.87 0.88 0.83 0.87 0.82 0.88 0.83 0.89 0.85 0.88 0.85 0.89 0.85"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "0.88 0.82 0.92 0.89 0.91 0.88 0.91 0.88 0.91 0.88 0.91 0.89 0.91 0.88 0.91 0.88 19th century Mate 0.85 0.80 0.89 0.86 0.89 0.85 0.89 0.85 0.89 0.85 0.89 0.86 0.89 0.85 0.89 0.85 webcrawling Malt 0.85 0.80 0.90 0.87 0.89 0.86 0.88 0.86 0.90 0.87 0.90 0.87 0.90 0.86 0.90 0.87 all three Mate 0.86 0.80 0.90 0.87 0.89 0.87 0.89 0.87 0.90 0.87 0.90 0.87 0.90 0.86 0.90 0.87"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Precision and recall for selected labels when parsing with the uninformative word embeddings. The 'gold count' column gives the number of occurrences of the label in our test data. Values differing by more than 10 percentage points from the baseline are marked in bold. 75 0.88 0.69 0.93 0.70 0.92 0.75 0.88 0.76 0.86 0.76 0.88 0.74 0.88 GMOD 384 0.95 0.96 0.92 0.96 0.95 0.96 0.96 0.96 0.95 0.95 0.95 0.95 0.95 0.95 KOM 110 0.89 0.72 0.89 0.89 0.90 0.91 0.88 0.68 0.89 0.68 0.87 0.67 0.86 0.68 NEB 224 0.84 0.83 0.84 0.85 0.86 0.86 0.83 0.80 0.84 0.81 0.85 0.82 0.81 0"
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS dystopias Mate 0.90 0.85 0.93 0.91 0.91 0.87 0.91 0.87 0.90 0.84 0.91 0.86 0.90 0.86 0.92 0.88 19th century Mate 0.88 0.83 0.91 0.88 0.89 0.84 0.89 0.84 0.88 0.83 0.89 0.84 0.88 0.84 0.90 0.85 webcrawling Malt 0.87 0.83 0.91 0.88 0.88 0.85 0.88 0.85 0.88 0.84 0.89 0.86 0.88 0.86 0.90 0.87 all three Mate 0.88 0.83 0.92 0.89 0.90 0.85 0.89 0.85 0.89 0.84 0.90 0.85 0.89 0.85 0.91 0.87",
"content": "<table><tr><td>text</td><td>traditional Parser UAS</td><td>normal</td><td>empty</td><td>zero</td><td>cube</td><td>ccube</td><td>gauss</td><td>sphere</td></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"text": "Attachment accuracies for the uninformative word embeddings, including punctuation",
"content": "<table><tr><td>text</td><td>traditional Parser UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS normal empty zero cube ccube gauss sphere</td></tr><tr><td>dystopias</td><td>Mate 0.90 0.85 0.93 0.91 0.91 0.87 0.91 0.87 0.92 0.89 0.93 0.90 0.93 0.90 0.93 0.90</td></tr><tr><td colspan=\"2\">19th century Mate 0.88 0.83 0.91 0.88 0.89 0.85 0.90 0.86 0.91 0.86 0.91 0.88 0.91 0.88 0.91 0.88</td></tr><tr><td colspan=\"2\">webcrawling Malt 0.87 0.83 0.91 0.88 0.89 0.87 0.89 0.87 0.91 0.88 0.91 0.88 0.91 0.88 0.91 0.88</td></tr><tr><td>all three</td><td>Mate 0.88 0.83 0.92 0.89 0.90 0.87 0.90 0.87 0.91 0.88 0.92 0.89 0.92 0.89 0.92 0.89</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Attachment accuracies for the uninformative POS embeddings, including punctuation",
"content": "<table><tr><td>text</td><td>traditional Parser UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS normal dystopias 19th century webcrawling total</td></tr><tr><td>dystopias</td><td>Mate 0.90 0.85 0.93 0.91 0.93 0.90 0.93 0.90 0.93 0.90 0.93 0.90</td></tr><tr><td colspan=\"2\">19th century Mate 0.88 0.83 0.91 0.88 0.90 0.87 0.91 0.88 0.91 0.87 0.91 0.87</td></tr><tr><td colspan=\"2\">webcrawling Malt 0.87 0.83 0.91 0.88 0.90 0.88 0.91 0.88 0.91 0.88 0.90 0.87</td></tr><tr><td>all three</td><td>Mate 0.88 0.83 0.92 0.89 0.91 0.88 0.91 0.89 0.91 0.88 0.91 0.88</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"html": null,
"text": "pvalues for the hypothesis that the results are not worse than Sticker's performance",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">7.0029 3.3850</td></tr><tr><td>all three</td><td colspan=\"7\">0.7350 0.0210 0.5000 0.0150 0.0570 0.0010 2.5480 0.0410 0.2090 0.0100 10.1369 1.8220</td></tr><tr><td colspan=\"2\">empty UAS (a) text LAS</td><td>zero UAS LAS</td><td>cube UAS</td><td>LAS</td><td>ccube UAS LAS</td><td>gauss UAS LAS</td><td>sphere UAS LAS</td></tr><tr><td>dystopias</td><td colspan=\"7\">4.5430 1.4080 8.9129 2.0470 26.1857 2.8720 2.4620 10.8949 44.2846 35.8266 0.1020 0.0020</td></tr><tr><td colspan=\"8\">19th century 4.8630 10.7799 3.7090 9.4529 17.6328 38.7696 1.3280 12.4229 14.2029 21.2568 0.3060 0.0840</td></tr><tr><td colspan=\"6\">webcrawling 1.5910 0.3840 3.4910 1.8410 6.4879 3.0440 0.1160 0.0120</td><td colspan=\"2\">1.8230 0.0450 0.0040 0.0020</td></tr><tr><td>all three</td><td colspan=\"7\">5.8059 3.7340 7.2999 4.9590 25.4797 41.4556 2.2270 3.7790 15.4428 8.3229 0.3610 0.0740</td></tr><tr><td colspan=\"8\">(b) pvalues for the hypothesis that the results are not better than the performance of the respective best conventional parser</td></tr><tr><td>(see Table 1)</td><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"text": "pvalues (in %) for Yeh's randomized permutation test on performance differences between the uninform ative word embeddings and the two baselines. Values below the significance threshold of 5 % are marked in italics\u037e values below the stricter threshold of 0.25 % are additionally marked in bold. Values for the combination of all three corpora were computed on a subset of 461 sentences so that pvalues are comparable.",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"type_str": "table",
"html": null,
"text": "Total number of tokens as well as sentence count and average, median and standard deviation of the number of tokens per sentence in our test sets",
"content": "<table><tr><td>text</td><td>traditional Parser UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS normal empty zero cube ccube gauss sphere</td></tr><tr><td>dystopias</td><td>Mate 0.</td></tr></table>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"text": "attachment accuracies for the uninformative word embeddings (like Tab. 1), ignoring punctuation",
"content": "<table><tr><td>text</td><td>traditional Parser UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS normal empty zero cube ccube gauss sphere</td></tr><tr><td>dystopias</td><td>Mate</td></tr></table>"
},
"TABREF8": {
"num": null,
"type_str": "table",
"html": null,
"text": "attachment accuracies for the uninformative POS embeddings (like Tab. 2), ignoring punctuation",
"content": "<table><tr><td>text</td><td>traditional Parser UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS normal dystopias 19th century webcrawling total</td></tr><tr><td>dystopias</td><td>Mate 0.88 0.82 0.92 0.89 0.91 0.88 0.91 0.88 0.91 0.88 0.91 0.88</td></tr><tr><td colspan=\"2\">19th century Mate 0.85 0.80 0.89 0.86 0.89 0.85 0.89 0.85 0.89 0.85 0.89 0.85</td></tr><tr><td colspan=\"2\">webcrawling Malt 0.85 0.80 0.90 0.87 0.89 0.86 0.89 0.86 0.89 0.86 0.89 0.86</td></tr><tr><td>all three</td><td>Mate 0.86 0.80 0.90 0.87 0.90 0.86 0.90 0.86 0.90 0.86 0.90 0.86</td></tr></table>"
},
"TABREF9": {
"num": null,
"type_str": "table",
"html": null,
"text": "attachment accuracies for the domainspecific word embeddings (like Tab. 3), ignoring punctuation 29.1617 3.3620 49.0645 46.4145 43.1016 36.6626 47.5075 40.9466 webcrawling 0.3660 7.1739 0.3280 5.7079 26.5617 25.0447 44.1776 45.7915 42.6606 35.2796 45.0225 47.3175",
"content": "<table><tr><td>text</td><td/><td>empty UAS LAS UAS LAS zero</td><td>cube UAS</td><td>LAS</td><td>ccube UAS LAS</td><td>gauss UAS LAS</td><td>sphere UAS LAS</td></tr><tr><td>dystopias</td><td colspan=\"7\">0.0050 0.0010 0.0200 0.0010 6.5989 0.0670 27.6917 34.9537 20.3168 14.8269 25.7857 21.0308</td></tr><tr><td colspan=\"8\">19th century 0.9970 0.1160 2.4010 0.1450 all three 1.2110 0.7660 1.8830 0.7500 25.9947 8.0109 46.2675 48.6785 39.1796 33.5217 42.4446 39.6556</td></tr><tr><td/><td/><td colspan=\"5\">(a) pvalues for the hypothesis that the results are not worse than Sticker's performance</td></tr><tr><td>text</td><td/><td colspan=\"6\">empty UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS zero cube ccube gauss sphere</td></tr><tr><td>dystopias</td><td/><td colspan=\"6\">20.1508 0.8100 8.9059 0.8490 0.0380 0.0010 0.0010 0.0010 0.0020 0.0010 0.0010 0.0010</td></tr><tr><td colspan=\"2\">19th century</td><td colspan=\"6\">1.0150 0.0550 0.4340 0.0460 0.0030 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010</td></tr><tr><td colspan=\"2\">webcrawling</td><td colspan=\"6\">0.2430 0.0010 0.3280 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010</td></tr><tr><td>all three</td><td/><td>4.1960 0.1660 2.5450 0</td><td/><td/><td/><td/></tr></table>"
},
"TABREF10": {
"num": null,
"type_str": "table",
"html": null,
"text": "pvalues (in %) for Yeh's randomized permutation test on performance differences between the uninform ative POS embeddings and the two baselines. Values below the significance threshold of 5 % are marked in italics\u037e values below the stricter threshold of 0.25 % are additionally marked in bold. Values for the combination of all three corpora were computed on a subset of 461 sentences so that pvalues are comparable..1267 20.1548 36.5816 33.2397 34.4367 14.9089 45.3085 26.7537 webcrawling 15.7938 17.4818 27.3167 31.9647 23.1158 21.8738 11.2789 12.4249 all three 25.5267 22.8208 33.2327 31.5717 31.0637 20.3748 30.1267 21.8918 (a) pvalues for the hypothesis that the results are not worse than Sticker's performance",
"content": "<table><tr><td>text</td><td>dystopias UAS LAS</td><td>19th century UAS LAS</td><td>webcrawling UAS LAS</td><td>total UAS LAS</td></tr><tr><td>dystopias</td><td colspan=\"4\">15.4958 12.1299 19.7578 13.7949 18.7668 6.8609 22.4608 10.3269</td></tr><tr><td>text</td><td colspan=\"4\">dystopias UAS LAS UAS LAS UAS LAS UAS LAS 19th century webcrawling total</td></tr><tr><td>dystopias</td><td/><td/><td/><td/></tr></table>"
},
"TABREF11": {
"num": null,
"type_str": "table",
"html": null,
"text": "Precision and recall for selected labels when parsing with the uninformative POS embeddings. The 'gold count' column gives the number of occurrences of the label in our test data. Values differing by more than 10 per centage points from the baseline are marked in bold.",
"content": "<table><tr><td>label</td><td>gold normal dystopias 19th century webcrawling count P R P R P R P R</td><td>total</td></tr><tr><td>APP</td><td colspan=\"2\">704 0.75 0.88 0.73 0.88 0.74 0.89 0.73 0.88 0.73 0.88</td></tr><tr><td>GMOD</td><td colspan=\"2\">384 0.95 0.96 0.94 0.93 0.94 0.94 0.93 0.94 0.94 0.92</td></tr><tr><td>KOM</td><td colspan=\"2\">110 0.89 0.72 0.88 0.69 0.90 0.71 0.88 0.71 0.88 0.71</td></tr><tr><td>NEB</td><td colspan=\"2\">224 0.84 0.83 0.86 0.82 0.88 0.80 0.88 0.82 0.81 0.81</td></tr><tr><td>OBJA</td><td colspan=\"2\">928 0.88 0.90 0.87 0.88 0.86 0.89 0.85 0.87 0.85 0.88</td></tr><tr><td>OBJD</td><td colspan=\"2\">163 0.78 0.77 0.70 0.79 0.75 0.74 0.67 0.72 0.72 0.78</td></tr><tr><td>OBJI</td><td colspan=\"2\">109 0.72 0.82 0.69 0.82 0.69 0.82 0.74 0.81 0.72 0.82</td></tr><tr><td>OBJP</td><td colspan=\"2\">114 0.51 0.28 0.46 0.25 0.46 0.28 0.49 0.29 0.41 0.27</td></tr><tr><td>PRED</td><td colspan=\"2\">277 0.83 0.85 0.81 0.81 0.82 0.83 0.81 0.81 0.81 0.79</td></tr><tr><td>ROOT</td><td colspan=\"2\">3466 1.00 0.99 1.00 0.99 1.00 0.99 1.00 0.99 1.00 0.99</td></tr><tr><td>S</td><td colspan=\"2\">1726 0.91 0.86 0.91 0.86 0.91 0.86 0.91 0.86 0.91 0.85</td></tr></table>"
}
}
}
}