{
"paper_id": "D13-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:41:10.865063Z"
},
"title": "Latent Anaphora Resolution for Cross-Lingual Pronoun Prediction",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "751 26",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "751 26",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {
"postBox": "Box 635",
"postCode": "751 26",
"settlement": "Uppsala",
"country": "Sweden"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper addresses the task of predicting the correct French translations of third-person subject pronouns in English discourse, a problem that is relevant as a prerequisite for machine translation and that requires anaphora resolution. We present an approach based on neural networks that models anaphoric links as latent variables and show that its performance is competitive with that of a system with separate anaphora resolution while not requiring any coreference-annotated training data. This demonstrates that the information contained in parallel bitexts can successfully be used to acquire knowledge about pronominal anaphora in an unsupervised way.",
"pdf_parse": {
"paper_id": "D13-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper addresses the task of predicting the correct French translations of third-person subject pronouns in English discourse, a problem that is relevant as a prerequisite for machine translation and that requires anaphora resolution. We present an approach based on neural networks that models anaphoric links as latent variables and show that its performance is competitive with that of a system with separate anaphora resolution while not requiring any coreference-annotated training data. This demonstrates that the information contained in parallel bitexts can successfully be used to acquire knowledge about pronominal anaphora in an unsupervised way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When texts are translated from one language into another, the translation reconstructs the meaning or function of the source text with the means of the target language. Generally, this has the effect that the entities occurring in the translation and their mutual relations will display similar patterns as the entities in the source text. In particular, coreference patterns tend to be very similar in translations of a text, and this fact has been exploited with good results to project coreference annotations from one language into another by using word alignments (Postolache et al., 2006; Rahman and Ng, 2012) .",
"cite_spans": [
{
"start": 569,
"end": 594,
"text": "(Postolache et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 595,
"end": 615,
"text": "Rahman and Ng, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "On the other hand, what is true in general need not be true for all types of linguistic elements. For instance, a substantial percentage of the English thirdperson subject pronouns he, she, it and they does not get realised as pronouns in French translations (Hardmeier, 2012) . Moreover, it has been recognised by various authors in the statistical machine translation (SMT) community (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Guillou, 2012 ) that pronoun translation is a difficult problem because, even when a pronoun does get translated as a pronoun, it may require choosing the correct word form based on agreement features that are not easily predictable from the source text.",
"cite_spans": [
{
"start": 259,
"end": 276,
"text": "(Hardmeier, 2012)",
"ref_id": "BIBREF9"
},
{
"start": 386,
"end": 413,
"text": "(Le Nagard and Koehn, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 414,
"end": 443,
"text": "Hardmeier and Federico, 2010;",
"ref_id": null
},
{
"start": 444,
"end": 457,
"text": "Guillou, 2012",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "The work presented in this paper investigates the problem of cross-lingual pronoun prediction for English-French. Given an English pronoun and its discourse context as well as a French translation of the same discourse and word alignments between the two languages, we attempt to predict the French word aligned to the English pronoun. As far as we know, this task has not been addressed in the literature before. In our opinion, it is interesting for several reasons. By studying pronoun prediction as a task in its own right, we hope to contribute towards a better understanding of pronoun translation with a longterm view to improving the performance of SMT systems. Moreover, we believe that this task can lead to interesting insights about anaphora resolution in a multi-lingual context. In particular, we show in this paper that the pronoun prediction task makes it possible to model the resolution of pronominal anaphora as a latent variable and opens up a way to solve a task relying on anaphora resolution without using any data annotated for anaphora. This is what we consider the main contribution of our present work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "We start by modelling cross-lingual pronoun prediction as an independent machine learning task after doing anaphora resolution in the source language (English) using the BART software (Broscheit et al., 2010) . We show that it is difficult to achieve satisfactory performance with standard maximum-The latest version released in March is equipped with ... It is sold at ... La derni\u00e8re version lanc\u00e9e en mars est dot\u00e9e de ... \u2022 est vendue ... entropy classifiers especially for low-frequency pronouns such as the French feminine plural pronoun elles. We propose a neural network classifier that achieves better precision and recall and manages to make reasonable predictions for all pronoun categories in many cases.",
"cite_spans": [
{
"start": 184,
"end": 208,
"text": "(Broscheit et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "We then go on to extend our neural network architecture to include anaphoric links as latent variables. We demonstrate that our classifier, now with its own source language anaphora resolver, can be trained successfully with backpropagation. In this setup, we no longer use the machine learning component included in the external coreference resolution system (BART) to predict anaphoric links. Anaphora resolution is done by our neural network classifier and requires only some quantity of word-aligned parallel data for training, completely obviating the need for a coreference-annotated training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "The overall setup of the classification task we address in this paper is shown in Figure 1 . We are given an English discourse containing a pronoun along with its French translation and word alignments between the two languages, which in our case were computed automatically using a standard SMT pipeline with GIZA++ (Och and Ney, 2003) . We focus on the four English third-person subject pronouns he, she, it and they. The output of the classifier is a multinomial distribution over six classes: the four French subject pronouns il, elle, ils and elles, corresponding to masculine and feminine singular and plural, respectively; the impersonal pronoun ce/c', which occurs in some very frequent constructions such as c'est (it is); and a sixth class OTHER, which indicates that none of these pronouns was used. In general, a pronoun may be aligned to multiple words; in this case, a training example is counted as a positive example for a class if the target word occurs among the words aligned to the pronoun, irrespective of the presence of other This task setup resembles the problem that an SMT system would have to solve to make informed choices when translating pronouns, an aspect of translation neglected by most existing SMT systems. An important difference between the SMT setup and our own classifiers is that we use context from humanmade translations for prediction. This potentially makes the task both easier and more difficult; easier, because the context can be relied on to be correctly translated, and more difficult, because human translators frequently create less literal translations than an SMT system would. Integrating pronoun prediction into the translation process would require significant changes to the standard SMT decoding setup in order to take long-range dependencies in the target language into account, which is why we do not address this issue in our current work.",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},
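{
"text": "To make the labelling scheme described above concrete, the following sketch shows how a class label could be read off the word alignment for one example. The function name extract_label and the data layout are illustrative assumptions rather than part of the original pipeline, and the ce/c' variants are assumed to be normalised beforehand.\n\nCLASSES = ['ce', 'elle', 'elles', 'il', 'ils']\n\ndef extract_label(tgt_tokens, alignment, pronoun_idx):\n    # target words aligned to the English pronoun (possibly several or none)\n    aligned = [tgt_tokens[j].lower() for j in alignment.get(pronoun_idx, [])]\n    for cls in CLASSES:\n        # positive for a class if the target word occurs among the aligned\n        # words, irrespective of any other aligned words\n        if cls in aligned:\n            return cls\n    return 'OTHER'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},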
{
"text": "In all the experiments presented in this paper, we used features from two different sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},
{
"text": "-Anaphora context features describe the source language pronoun and its immediate context consisting of three words to its left and three words to its right. (Collins, 1999) . The different handling of anaphora context features and antecedent features is due to the fact that we always consider a constant number of context words on the source side, whereas the number of word vectors to be considered depends on the number of antecedent candidates and on the number of target words aligned to each antecedent.",
"cite_spans": [
{
"start": 158,
"end": 173,
"text": "(Collins, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},
{
"text": "The encoding of the antecedent features is illustrated in Figure 2 for a training example with two antecedent candidates translated to elle and la version, respectively. The target words are represented as one-hot vectors with the dimensionality of the target language vocabulary. These vectors are then averaged to yield a single vector per antecedent candidate. Finally, the vectors of all candidates for a given training example are weighted by the probabilities assigned to them by the anaphora resolver (p 1 and p 2 ) and summed to yield a single vector per training example.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},
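{
"text": "A minimal numpy sketch of this encoding (our own illustration, not the original implementation), using a toy vocabulary and made-up resolver probabilities:\n\nimport numpy as np\n\ndef antecedent_vector(candidates, probs, vocab_size):\n    # candidates: one list of target word ids per antecedent candidate\n    # probs: anaphora resolver probabilities p_1 ... p_n, one per candidate\n    result = np.zeros(vocab_size)\n    for word_ids, p in zip(candidates, probs):\n        one_hots = np.zeros((len(word_ids), vocab_size))\n        one_hots[np.arange(len(word_ids)), word_ids] = 1.0\n        # average the one-hot vectors within a candidate, then weight by p\n        result += p * one_hots.mean(axis=0)\n    return result  # a single vector per training example\n\n# e.g. one candidate translated as elle (id 7), one as la version (ids 3, 12)\nvec = antecedent_vector([[7], [3, 12]], [0.8, 0.2], vocab_size=20)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "2"
},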
{
"text": "We run experiments with two different test sets. The TED data set consists of around 2.6 million tokens of lecture subtitles released in the WIT 3 corpus (Cettolo et al., 2012). The WIT 3 training data yields 71,052 examples, which were randomly partitioned into a training set of 63,228 examples and a test set of 7,824 examples. The official WIT 3 development and test sets were not used in our experiments. The news-commentary data set is version 6 of the parallel news-commentary corpus released as a part of the WMT 2011 training data 1 . It contains around 2.8 million tokens of news text and yields 31,017 data points, which were randomly split into 27,900 training examples and 3,117 test instances. The distribution of the classes in the two training sets is shown in Table 1 . One thing to note is the dominance of the OTHER class, which pools together such different phenomena as translations with other pronouns not in our list (e. g., celui-ci) and translations with full noun phrases instead of pronouns. Splitting this group into more meaningful subcategories is not straightforward and must be left to future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 777,
"end": 784,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Sets and External Tools",
"sec_num": "3"
},
{
"text": "The feature setup of all our classifiers requires the detection of potential antecedents and the extraction of features pairing anaphoric pronouns with antecedent candidates. Some of our experiments also rely on an external anaphora resolution component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets and External Tools",
"sec_num": "3"
},
{
"text": "We use the open-source anaphora resolver BART to generate this information. BART (Broscheit et al., 2010) is an anaphora resolution toolkit consisting of a markable detection and feature extraction pipeline based on a variety of standard natural language processing (NLP) tools and a machine learning component to predict coreference links including both pronominal anaphora and noun-noun coreference. In our experiments, we always use BART's markable detection and feature extraction machinery. Markable detection is based on the identification of noun phrases in constituency parses generated with the Stanford parser (Klein and Manning, 2003) . The set of features extracted by BART is an extension of the widely used mention-pair anaphora resolution feature set by Soon et al. 2001(see below, Section 6).",
"cite_spans": [
{
"start": 81,
"end": 105,
"text": "(Broscheit et al., 2010)",
"ref_id": "BIBREF1"
},
{
"start": 620,
"end": 645,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets and External Tools",
"sec_num": "3"
},
{
"text": "In the experiments of the next two sections, we also use BART to predict anaphoric links for pronouns. The model used with BART is a maximum entropy ranker trained on the ACE02-npaper corpus (LDC2003T11). In order to obtain a probability distribution over antecedent candidates rather than onebest predictions or coreference sets, we modified the ranking component with which BART resolves pronouns to normalise and output the scores assigned by the ranker to all candidates instead of picking the highest-scoring candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets and External Tools",
"sec_num": "3"
},
{
"text": "In order to create a simple, but reasonable baseline for our task, we trained a maximum entropy (ME) classifier with the MegaM software package 2 using the features described in the previous section and the anaphora links found by BART. Results are shown in Table 2 . The baseline results show an overall higher accuracy for the TED data than for the newscommentary data. While the precision is above 50 % in all categories and considerably higher in some, recall varies widely.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Baseline Classifiers",
"sec_num": "4"
},
{
"text": "The pronoun elles is particularly interesting. This is the feminine plural of the personal pronoun, and it usually corresponds to the English pronoun they, which is not marked for gender. In French, elles is a marked choice which is only used if the antecedent exclusively refers to females or feminine-gendered objects. The presence of a single item with masculine grammatical gender in the antecedent will trigger the use of the masculine plural pronoun ils instead. This distinction cannot be predicted from the English source pronoun or its context; making correct predictions requires knowledge about the antecedent of the pronoun. Moreover, elles is a low-frequency pronoun. There are only 1,909 occurrences of this pro-noun in the TED training data, and 1,077 in the newscommentary training set. Because of these special properties of the feminine plural class, we argue that the performance of a classifier on elles is a good indicator of how well it can represent relevant knowledge about pronominal anaphora as opposed to overfitting to source contexts or acting on prior assumptions about class frequencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Classifiers",
"sec_num": "4"
},
{
"text": "In accordance with the general linguistic preference for ils, the classifier tends to predict ils much more often than elles when encountering an English plural pronoun. This is reflected in the fact that elles has much lower recall than ils. Clearly, the classifier achieves a good part of its accuracy by making majority choices without exploiting deeper knowledge about the antecedents of pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Classifiers",
"sec_num": "4"
},
{
"text": "An additional experiment with a subset of 27,900 training examples from the TED data confirms that the difference between TED and news commentaries is not just an effect of training data size, but that TED data is genuinely easier to predict than news commentaries. In the reduced data TED condition, the classifier achieves an accuracy of 0.673. Precision and recall of all classifiers are much closer to the Figure 3 : Neural network for pronoun prediction large-data TED condition than to the news commentary experiments, except for elles, where we obtain an F-score of 0.072 (P 0.818, R 0.038), indicating that small training data size is a serious problem for this low-frequency class.",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 418,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Classifiers",
"sec_num": "4"
},
{
"text": "E P R1 L1 R2 L2 R3 L3 p 3 p 2 p 1 3 2 1 H S A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Classifiers",
"sec_num": "4"
},
{
"text": "In the previous section, we saw that a simple multiclass maximum entropy classifier, while making correct predictions for much of the data set, has a significant bias towards making majority class decisions, relying more on prior assumptions about the frequency distribution of the classes than on antecedent features when handling examples of less frequent classes. In order to create a system that can be trained to rely more explicitly on antecedent information, we created a neural network classifier for our task. The introduction of a hidden layer should enable the classifier to learn abstract concepts such as gender and number that are useful across multiple output categories, so that the performance of sparsely represented classes can benefit from the training examples of the more frequent classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
{
"text": "The overall structure of the network is shown in Figure 3 . As inputs, the network takes the same features that were available to the baseline ME classifier, based on the source pronoun (P) with three words of context to its left (L1 to L3) and three words to its right (R1 to R3) as well as the words aligned to the syntactic head words of all possible antecedent candidates as found by BART (A). All words are encoded as one-hot vectors whose dimensionality is equal to the vocabulary size. If multiple words are aligned to the syntactic head of an antecedent candidate, their word vectors are averaged with uniform weights. The resulting vectors for each antecedent are then averaged with weights defined by the posterior distribution of the anaphora resolver in BART (p 1 to p 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
{
"text": "The network has two hidden layers. The first layer (E) maps the input word vectors to a low-dimensional representation. In this layer, the embedding weights for all the source language vectors (the pronoun and its 6 context words) are tied, so if two words are the same, they are mapped to the same lowerdimensional embedding irrespective of their position relative to the pronoun. The embedding of the antecedent word vectors is independent, as these word vectors represent target language words. The entire embedding layer is then mapped to another hidden layer (H), which is in turn connected to a softmax output layer (S) with 6 outputs representing the classes ce, elle, elles, il, ils and OTHER. The non-linearity of both hidden layers is the logistic sigmoid function, f (x) = 1/(1 + e \u2212x ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
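{
"text": "The following forward-pass sketch summarises the architecture just described. Layer sizes follow the text (20-dimensional embeddings, 50 hidden units, 6 output classes), but the weight names and the use of plain numpy are our own simplifying assumptions.\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef softmax(x):\n    e = np.exp(x - x.max())\n    return e / e.sum()\n\ndef forward(src_onehots, ant_vector, E_src, E_tgt, W_h, b_h, W_s, b_s):\n    # E_src is tied across the pronoun and its 6 context words (7 inputs);\n    # E_tgt embeds the averaged target-language antecedent vector separately\n    src_emb = [sigmoid(E_src @ v) for v in src_onehots]  # 7 vectors of size 20\n    ant_emb = sigmoid(E_tgt @ ant_vector)                # 1 vector of size 20\n    e = np.concatenate(src_emb + [ant_emb])              # embedding layer E, size 160\n    h = sigmoid(W_h @ e + b_h)                           # hidden layer H, size 50\n    return softmax(W_s @ h + b_s)                        # ce, elle, elles, il, ils, OTHER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},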
{
"text": "In all experiments reported in this paper, the dimensionality of the source and target language word embeddings is 20, resulting in a total embedding layer size of 160, and the size of the last hidden layer is equal to 50. These sizes are fairly small. In experiments with larger layer sizes, we were able to obtain similar, but no better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
{
"text": "The neural network is trained with mini-batch stochastic gradient descent with backpropagated gradients using the RMSPROP algorithm with crossentropy as the objective function. 3 In contrast to standard gradient descent, RMSPROP normalises the magnitude of the gradient components by dividing them by a root-mean-square moving average. We found this led to faster convergence. Other features of our training algorithm include the use of momentum to even out gradient oscillations, adaptive learning rates for each weight as well as adaptation of the global learning rate as a function of current training progress. The network is regularised with an 2 weight penalty. Good settings of the initial learning rate and the weight cost parameter (both around 0.001 in most experiments) were found by manual experimentation. Generally, we train our networks for 300 epochs, compute the validation error on a held-out set of some 10 % of the training data after each epoch and use the model that achieved the lowest validation error for testing.",
"cite_spans": [
{
"start": 177,
"end": 178,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
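{
"text": "A compact sketch of an RMSPROP-style update with an L2 weight penalty as described above; the decay constant and epsilon are assumed values, and momentum as well as the per-weight learning-rate adaptation are omitted for brevity.\n\nimport numpy as np\n\ndef rmsprop_step(w, grad, ms, lr=0.001, decay=0.9, eps=1e-8, weight_cost=0.001):\n    grad = grad + weight_cost * w                # L2 weight penalty\n    ms = decay * ms + (1.0 - decay) * grad ** 2  # root-mean-square moving average\n    w = w - lr * grad / (np.sqrt(ms) + eps)      # normalised gradient step\n    return w, ms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},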
{
"text": "Since the source context features are very informative and it is comparatively more difficult to learn from the antecedents, the network sometimes had a tendency to overfit to the source features and disregard antecedent information. We found that this problem can be solved effectively by presenting a part of the training without any source features, forcing the network to learn from the information contained in the antecedents. In all experiments in this paper, we zero out all source features (input layers P, L1 to L3 and R1 to R3) with a probability of 50 % in each training example. At test time, no information is zeroed out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
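{
"text": "A sketch of this source-feature blanking (our own illustration of the procedure): with probability 0.5 per training example all source-side inputs are zeroed, while nothing is zeroed at test time.\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef maybe_zero_source(src_onehots, p_zero=0.5, training=True):\n    # src_onehots: the inputs for P, L1 to L3 and R1 to R3 of one example\n    if training and rng.random() < p_zero:\n        return [np.zeros_like(v) for v in src_onehots]\n    return src_onehots",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},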
{
"text": "Classification results with this network are shown in Table 3 . We note that the accuracy has increased slightly for the TED test set and remains exactly the same for the news commentary corpus. However, a closer look on the results for individual classes reveals that the neural network makes better predictions for almost all classes. In terms of F-score, the only class that becomes slightly worse is the OTHER class for the news commentary corpus because of lower recall, indicating that the neural network classifier is less biased towards using the uninformative OTHER category. Recall for elle and elles increases considerably, but especially for elles it is still quite low. The increase in recall comes with some loss in precision, but the net effect on F-score is clearly positive.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Neural Network Classifier",
"sec_num": "5"
},
{
"text": "Considering Figure 1 again, we note that the bilingual setting of our classification task adds some information not available to the monolingual anaphora resolver that can be helpful when determining the correct antecedent for a given pronoun. Knowing the gender of the translation of a pronoun limits the set of possible antecedents to those whose translation is morphologically compatible with the target language pronoun. We can exploit this fact to learn how to resolve anaphoric pronouns without requiring data with manually annotated anaphoric links.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "To achieve this, we extend our neural network with a component to predict the probability of each antecedent candidate to be the correct antecedent (Figure 4) . The extended network is identical to the previous version except for the upper left part dealing with anaphoric link features. The only difference between the two networks is the fact that anaphora resolution is now performed by a part of our neural network itself instead of being done by an external module and provided to the classifier as an input.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 158,
"text": "(Figure 4)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "In this setup, we still use some parts of the BART toolkit to extract markables and compute features. However, we do not make use of the machine learning component in BART that makes the actual predictions. Since this is the only component trained on coreference-annotated data in a typical BART configuration, no coreference annotations are used anywhere in our system even though we continue to rely on the external anaphora resolver for preprocessing to avoid implementing our own markable and feature extractors and to make comparison easier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "For each candidate markable identified by BART's preprocessing pipeline, the anaphora resolution model receives as input a link feature vector (T) describing relevant aspects of the antecedent candidateanaphora pair. This feature vector is generated by the feature extraction machinery in BART and includes a standard feature set for coreference resolution partially based on work by Soon et al. (2001) . We use the following feature extractors in BART, each of Our baseline set of features was borrowed wholesale from a working coreference system and includes some features that are not relevant to the task at hand, e. g., features indicating that the anaphora is a pronoun, is not a named entity, etc. After removing all features that assume constant values in the training set when resolving antecedents for the set of pronouns we consider, we are left with a basic set of 37 anaphoric link features that are fed as inputs to our network. These features are exactly the same as those available to the anaphora resolution classifier in the BART system used in the previous section.",
"cite_spans": [
{
"start": 384,
"end": 402,
"text": "Soon et al. (2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "Each training example for our network can have an arbitrary number of antecedent candidates, each of which is described by an antecedent word vector (A) and by an anaphoric link vector (T). The anaphoric link features are first mapped to a regular hidden layer with logistic sigmoid units (U). The activations of the hidden units are then mapped to a single value, which functions as an element in a softmax layer over all antecedent candidates (V). This softmax layer assigns a probability to each antecedent candidate, which we then use to compute a weighted average over the antecedent word vector, replacing the probabilities p i in Figures 2 and 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 637,
"end": 652,
"text": "Figures 2 and 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
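{
"text": "A sketch of this anaphora resolution subnetwork (our own simplification, with assumed weight names): the shared weights map each 37-dimensional link vector to a score, a softmax over the candidates yields the probabilities, and the antecedent word vectors are pooled accordingly.\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef softmax(x):\n    e = np.exp(x - x.max())\n    return e / e.sum()\n\ndef resolve_and_pool(link_feats, ant_vectors, W_u, b_u, w_v):\n    # link_feats: one anaphoric link vector (T) per antecedent candidate\n    # ant_vectors: one averaged target word vector (A) per candidate\n    # W_u, b_u and w_v are shared across candidates, so the number of weights\n    # does not depend on how many candidates an example has\n    scores = np.array([w_v @ sigmoid(W_u @ t + b_u) for t in link_feats])  # layer U\n    probs = softmax(scores)                     # layer V over all candidates\n    pooled = sum(p * a for p, a in zip(probs, ant_vectors))\n    return pooled, probs                        # pooled replaces the p_i-weighted sum",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},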
{
"text": "At training time, the network's anaphora resolution component is trained in exactly the same way as the rest of the network. The error signal from the embedding layer is backpropagated both to the weight matrix defining the antecedent word embedding and to the anaphora resolution subnetwork. Note that the number of weights in the network is the same for all training examples even though the number of antecedent candidates varies because all weights related to antecedent word features and anaphoric link features are shared between all antecedent candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "One slightly uncommon feature of our neural network is that it contains an internal softmax layer to generate normalised probabilities over all possible antecedent candidates. Moreover, weights are shared between all antecedent candidates, so the inputs of our internal softmax layer share dependencies on the same weight variables. When computing derivatives with backpropagation, these shared dependencies must be taken into account. In particular, the outputs y i of the antecedent resolution layer are the result of a softmax applied to functions of some shared variables q:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = exp f i (q) \u2211 k exp f k (q)",
"eq_num": "(1)"
}
],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "The derivatives of any y i with respect to q, which can be any of the weights in the anaphora resolution subnetwork, have dependencies on the derivatives of the other softmax inputs with respect to q:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202 y i \u2202 q = y i \u2202 f i (q) \u2202 q \u2212 \u2211 k y k \u2202 f k (q) \u2202 q",
"eq_num": "(2)"
}
],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
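{
"text": "For completeness, equation (2) follows from equation (1) by the quotient rule; this intermediate step is our own, using the same notation. Writing s(q) = \\sum_k \\exp f_k(q), we have \\frac{\\partial y_i}{\\partial q} = \\frac{\\frac{\\partial f_i(q)}{\\partial q} \\exp f_i(q) \\, s(q) - \\exp f_i(q) \\sum_k \\frac{\\partial f_k(q)}{\\partial q} \\exp f_k(q)}{s(q)^2} = y_i \\frac{\\partial f_i(q)}{\\partial q} - y_i \\sum_k y_k \\frac{\\partial f_k(q)}{\\partial q}, which is equation (2) after factoring out y_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},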
{
"text": "This makes the implementation of backpropagation for this part of the network somewhat more complicated, but in the case of our networks, it has no major impact on training time. Experimental results for this network are shown in Table 4 . Compared with Table 3 , we note that the overall accuracy is only very slightly lower for TED, and for the news commentaries it is actually better. When it comes to F-scores, the performance for elles improves by a small amount, while the effect on the other classes is a bit more mixed. Even where it gets worse, the differences are not dramatic considering that we eliminated a very knowledge-rich resource from the training process. This demonstrates that it is possible, in our classification task, to obtain good results without using any data manually annotated for anaphora and to rely entirely on unsupervised latent anaphora resolution.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 254,
"end": 261,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Latent Anaphora Resolution",
"sec_num": "6"
},
{
"text": "The results presented in the preceding section represent a clear improvement over the ME classifiers in Table 2 , even though the overall accuracy increased only slightly. Not only does our neural network classifier achieve better results on the classification task at hand without requiring an anaphora resolution classifier trained on manually annotated data, but it performs clearly better for the feminine categories that reflect minority choices requiring knowledge about the antecedents. Nevertheless, the performance is still not entirely satisfactory.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Further Improvements",
"sec_num": "7"
},
{
"text": "By subjecting the output of our classifier on a development set to a manual error analysis, we found that a fairly large number of errors belong to two error types: On the one hand, the preprocessing pipeline used to identify antecedent candidates does not always include the correct antecedent in the set presented to the neural network. Whenever this occurs, it is obvious that the classifier cannot possibly find the correct antecedent. Out of 76 examples of the category elles that had been mistakenly predicted as ils, we found that 43 suffered from this problem. In other classes, the problem seems to be somewhat less common, but it still exists. On the other hand, in many cases (23 out of 76 for the category mentioned before) the anaphora resolution subnetwork does identify an antecedent manually recognised to belong to the right gender/number group, but still predicts an incorrect pronoun. This may indicate that the network has difficulties learning a correct gender/number representation for all words in the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Improvements",
"sec_num": "7"
},
{
"text": "The pipeline we use to extract potential antecedent candidates is borrowed from the BART anaphora resolution toolkit. BART uses a syntactic parser to identify noun phrases as markables. When extracting antecedent candidates for coreference prediction, it starts by considering a window consisting of the sentence in which the anaphoric pronoun is located and the two immediately preceding sentences. Markables in this window are checked for morphological compatibility in terms of gender and number with the anaphoric pronoun, and only compatible markables are extracted as antecedent candidates. If no compatible markables are found in the initial window, the window is successively enlarged one sentence at a time until at least one suitable markable is found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxing Markable Extraction",
"sec_num": "7.1"
},
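{
"text": "A sketch of this extraction procedure (our own reconstruction; the markable representation and the compatibility test are assumed interfaces, not BART's actual API):\n\ndef extract_candidates(markables_by_sentence, pronoun_sent, compatible, window=2):\n    # markables_by_sentence: one list of markables per sentence\n    # compatible(m): gender/number compatibility with the anaphoric pronoun\n    while True:\n        start = max(0, pronoun_sent - window)\n        cands = [m for s in range(start, pronoun_sent + 1)\n                 for m in markables_by_sentence[s] if compatible(m)]\n        if cands or start == 0:\n            return cands\n        window += 1  # enlarge the window one sentence at a time",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxing Markable Extraction",
"sec_num": "7.1"
},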
{
"text": "Our error analysis shows that this procedure misses some relevant markables both because the initial two-sentence extraction window is too small and because the morphological compatibility check incorrectly filters away some markables that should have been considered as candidates. By contrast, the extraction procedure does extract quite a number of first and second person noun phrases (I, we, you and their oblique forms) in the TED talks which are extremely unlikely to be the antecedent of a later occurrence of he, she, it or they. As a first step, we therefore adjust the extraction criteria to our task by increasing the initial extraction window to five sentences, excluding first and second person markables and removing the morphological compatibility requirement. The compatibility check is still used to control expansion of the extraction window, but it is no longer applied to filter the extracted markables. This increases the accuracy to 0.701 for TED and 0.602 for the news commentaries, while the performance for elles improves to F-scores of 0.531 (TED; P 0.690, R 0.432) and 0.304 (News commentaries; P 0.444, R 0.231), respectively. Note that these and all the following results are not directly comparable to the ME baseline results in Table 2 , since they include modifications and improvements to the training data extraction procedure that might possibly lead to benefits in the ME setting as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 1260,
"end": 1267,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Relaxing Markable Extraction",
"sec_num": "7.1"
},
{
"text": "In order to make it easier for the classifier to identify the gender and number properties of infrequent words, we extend the word vectors with features indicating possible morphological features for each word. In early experiments with ME classifiers, we found that our attempts to do proper gender and number tagging in French text did not improve classification performance noticeably, presumably because the annotation was too noisy. In more recent experiments, we just add features indicating all possible morphological interpretations of each word, rather than trying to disambiguate them. To do this, we look up the morphological annotations of the French words in the Lefff dictionary (Sagot et al., 2006) and intro-duce a set of new binary features to indicate whether a particular reading of a word occurs in that dictionary. These features are then added to the one-hot representation of the antecedent words. Doing so improves the classifier accuracy to 0.711 (TED) and 0.604 (News commentaries), while the F-scores for elles reach 0.589 (TED; P 0.649, R 0.539) and 0.500 (News commentaries; P 0.545, R 0.462), respectively.",
"cite_spans": [
{
"start": 693,
"end": 713,
"text": "(Sagot et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Lexicon Knowledge",
"sec_num": "7.2"
},
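{
"text": "A sketch of this feature extension (our own illustration with a toy stand-in for the Lefff lookup): every possible morphological reading of a word contributes a binary feature that is appended to its one-hot vector, without any disambiguation.\n\nimport numpy as np\n\nREADINGS = ['masc', 'fem', 'sg', 'pl']\nLEFFF = {'version': {'fem', 'sg'}, 'livres': {'masc', 'pl'}}  # toy entries only\n\ndef extend_with_morphology(word, one_hot):\n    # one binary feature per possible reading listed in the dictionary\n    morph = np.array([1.0 if r in LEFFF.get(word, set()) else 0.0 for r in READINGS])\n    return np.concatenate([one_hot, morph])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Lexicon Knowledge",
"sec_num": "7.2"
},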
{
"text": "Even though the modified antecedent candidate extraction with its larger context window and without the morphological filter results in better performance on both test sets, additional error analysis reveals that the classifiers has greater problems identifying the correct markable in this setting. One reason for this may be that the baseline anaphoric link feature set described above (Section 6) only includes two very rough binary distance features which indicate whether or not the anaphora and the antecedent candidate occur in the same or in immediately adjacent sentences. With the larger context window, this may be too unspecific. In our final experiment, we therefore enable some additional features which are available in BART, but disabled in the baseline system: -Distance in number of markables -Distance in number of sentences -Sentence distance, log-transformed -Distance in number of words -Part of speech of head word Most of these encode the distance between the anaphora and the antecedent candidate in more precise ways. Complete results for this final system are presented in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1100,
"end": 1107,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "More Anaphoric Link Features",
"sec_num": "7.3"
},
{
"text": "Including these additional features leads to another slight increase in accuracy for both corpora, with similar or increased classifier F-scores for most classes except elle in the news commentary condition. In particular, we should like to point out the performance of our benchmark classifier for elles, which suffered from extremely low recall in the first classifiers and approaches the performance of the other classes, with nearly balanced precision and recall, in this final system. Since elles is a low-frequency class and cannot be reliably predicted using source context alone, we interpret this as evidence that our final neural network classifier has incorporated some relevant knowledge about pronominal anaphora that the baseline ME classifier and earlier versions of our network have no access to. This is particularly remarkable because no data manually annotated for coreference was used for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "More Anaphoric Link Features",
"sec_num": "7.3"
},
{
"text": "Even though it was recognised years ago that the information contained in parallel corpora may provide valuable information for the improvement of anaphora resolution systems, there have not been many attempts to cash in on this insight. Mitkov and Barbu (2003) exploit parallel data in English and French to improve pronominal anaphora resolution by combining anaphora resolvers for the individual languages with handwritten rules to resolve conflicts between the output of the language-specific resolvers. Veselovsk\u00e1 et al. (2012) apply a similar strategy to English-Czech data to resolve different uses of the pronoun it. Other work has used word alignments to project coreference annotations from one language to another with a view to training anaphora resolvers in the target language (Postolache et al., 2006; de Souza and Or\u0203san, 2011) . Rahman and Ng (2012) instead use machine translation to translate their test data into a language for which they have an anaphora resolver and then project the annotations back to the original language. Completely unsupervised monolingual anaphora resolution has been approached using, e. g., Markov logic (Poon and Domingos, 2008) and the Expectation-Maximisation algorithm (Cherry and Bergsma, 2005; Charniak and Elsner, 2009) . To the best of our knowledge, the direct application of machine learning techniques to parallel data in a task related to anaphora resolution is novel in our work.",
"cite_spans": [
{
"start": 238,
"end": 261,
"text": "Mitkov and Barbu (2003)",
"ref_id": "BIBREF13"
},
{
"start": 791,
"end": 816,
"text": "(Postolache et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 817,
"end": 843,
"text": "de Souza and Or\u0203san, 2011)",
"ref_id": "BIBREF7"
},
{
"start": 846,
"end": 866,
"text": "Rahman and Ng (2012)",
"ref_id": "BIBREF17"
},
{
"start": 1152,
"end": 1177,
"text": "(Poon and Domingos, 2008)",
"ref_id": "BIBREF15"
},
{
"start": 1221,
"end": 1247,
"text": "(Cherry and Bergsma, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 1248,
"end": 1274,
"text": "Charniak and Elsner, 2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "Neural networks and deep learning techniques have recently gained some popularity in natural language processing. They have been applied to tasks such as language modelling (Bengio et al., 2003; Schwenk, 2007) , translation modelling in statistical machine translation (Le et al., 2012) , but also part-ofspeech tagging, chunking, named entity recognition and semantic role labelling (Collobert et al., 2011) . In tasks related to anaphora resolution, standard feedforward neural networks have been tested as a classifier in an anaphora resolution system (Stuckardt, 2007) , but the network design presented in our work is novel.",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 195,
"end": 209,
"text": "Schwenk, 2007)",
"ref_id": "BIBREF19"
},
{
"start": 269,
"end": 286,
"text": "(Le et al., 2012)",
"ref_id": "BIBREF11"
},
{
"start": 384,
"end": 408,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 555,
"end": 572,
"text": "(Stuckardt, 2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "In this paper, we have introduced cross-lingual pronoun prediction as an independent natural language processing task. Even though it is not an end-to-end task, pronoun prediction is interesting for several reasons. It is related to the problem of pronoun translation in SMT, a currently unsolved problem that has been addressed in a number of recent research publications (Le Nagard and Koehn, 2010; Hardmeier and Federico, 2010; Guillou, 2012) without reaching a major breakthrough. In this work, we have shown that pronoun prediction can be effectively modelled in a neural network architecture with relatively simple features. More importantly, we have demonstrated that the task can be exploited to train a classifier with a latent representation of anaphoric links. With parallel text as its only supervision this classifier achieves a level of performance that is similar to, if not better than, that of a classifier using a regular anaphora resolution system trained with manually annotated data.",
"cite_spans": [
{
"start": 373,
"end": 400,
"text": "(Le Nagard and Koehn, 2010;",
"ref_id": "BIBREF12"
},
{
"start": 401,
"end": 430,
"text": "Hardmeier and Federico, 2010;",
"ref_id": null
},
{
"start": 431,
"end": 445,
"text": "Guillou, 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "http://www.statmt.org/wmt11/translation-task. html (3 July 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.umiacs.umd.edu/~hal/megam/ (20 June 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our training procedure is greatly inspired by a series of online lectures held by Geoffrey Hinton in 2012 (https://www. coursera.org/course/neuralnets, 10 September 2013).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research, 3:1137-1155.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BART: A multilingual anaphora resolution system",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Broscheit",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Kepa Joseba",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zanoli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluations (SemEval-2010)",
"volume": "",
"issue": "",
"pages": "15--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Broscheit, Massimo Poesio, Simone Paolo Ponzetto, Kepa Joseba Rodriguez, Lorenza Romano, Olga Uryupina, Yannick Versley, and Roberto Zanoli. 2010. BART: A multilingual anaphora resolution sys- tem. In Proceedings of the 5th International Work- shop on Semantic Evaluations (SemEval-2010), Upp- sala, Sweden, 15-16 July 2010.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WIT 3 : Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT 3 : Web inventory of transcribed and trans- lated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261-268, Trento, Italy.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "EM works for pronoun anaphora resolution",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "148--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Micha Elsner. 2009. EM works for pronoun anaphora resolution. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 148-156, Athens, Greece.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An Expectation Maximization approach to pronoun resolution",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and Shane Bergsma. 2005. An Expecta- tion Maximization approach to pronoun resolution. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 88- 95, Ann Arbor, Michigan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2461--2505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2461-2505.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Can projected chains in parallel corpora help coreference resolution?",
"authors": [
{
"first": "Souza",
"middle": [],
"last": "Jos\u00e9 De",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Or\u0203san",
"suffix": ""
}
],
"year": 2011,
"venue": "Anaphora Processing and Applications",
"volume": "7099",
"issue": "",
"pages": "59--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 de Souza and Constantin Or\u0203san. 2011. Can pro- jected chains in parallel corpora help coreference reso- lution? In Iris Hendrickx, Sobha Lalitha Devi, Ant\u00f3nio Branco, and Ruslan Mitkov, editors, Anaphora Process- ing and Applications, volume 7099 of Lecture Notes in Computer Science, pages 59-69. Springer, Berlin.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Christian Hardmeier and Marcello Federico. 2010. Modelling pronominal anaphora in statistical machine translation",
"authors": [
{
"first": "Liane",
"middle": [],
"last": "Guillou",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "283--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liane Guillou. 2012. Improving pronoun translation for statistical machine translation. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computa- tional Linguistics, pages 1-10, Avignon, France. Christian Hardmeier and Marcello Federico. 2010. Mod- elling pronominal anaphora in statistical machine trans- lation. In Proceedings of the seventh International Workshop on Spoken Language Translation (IWSLT), pages 283-289, Paris, France.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discourse in statistical machine translation: A survey and a case study",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Hardmeier. 2012. Discourse in statistical ma- chine translation: A survey and a case study. Discours, 11.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo, Japan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Continuous space translation models with neural networks",
"authors": [
{
"first": "Hai-Son",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai-Son Le, Alexandre Allauzen, and Fran\u00e7ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technologies, pages 39-48, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Aiding pronoun translation with co-reference resolution",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Ronan Le Nagard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR",
"volume": "",
"issue": "",
"pages": "252--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Le Nagard and Philipp Koehn. 2010. Aiding pro- noun translation with co-reference resolution. In Pro- ceedings of the Joint Fifth Workshop on Statistical Ma- chine Translation and MetricsMATR, pages 252-261, Uppsala, Sweden.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using bilingual corpora to improve pronoun resolution",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
},
{
"first": "Catalina",
"middle": [],
"last": "Barbu",
"suffix": ""
}
],
"year": 2003,
"venue": "Languages in Contrast",
"volume": "4",
"issue": "2",
"pages": "201--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Mitkov and Catalina Barbu. 2003. Using bilingual corpora to improve pronoun resolution. Languages in Contrast, 4(2):201-211.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational linguistics",
"volume": "29",
"issue": "",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment models. Computational linguistics, 29:19-51.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Joint unsupervised coreference resolution with Markov Logic",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "650--659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Pedro Domingos. 2008. Joint un- supervised coreference resolution with Markov Logic. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 650- 659, Honolulu, Hawaii.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Transferring coreference chains through word alignment",
"authors": [
{
"first": "Oana",
"middle": [],
"last": "Postolache",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Cristea",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Or\u0203san",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th Conference on International Language Resources and Evaluation (LREC-2006)",
"volume": "",
"issue": "",
"pages": "889--892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oana Postolache, Dan Cristea, and Constantin Or\u0203san. 2006. Transferring coreference chains through word alignment. In Proceedings of the 5th Conference on International Language Resources and Evaluation (LREC-2006), pages 889-892, Genoa.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Translation-based projection for multilingual coreference resolution",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "720--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2012. Translation-based projection for multilingual coreference resolution. In Proceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 720- 730, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Lefff 2 syntactic lexicon for French: architecture, acquisition, use",
"authors": [
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Lionel",
"middle": [],
"last": "Cl\u00e9ment",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Villemonte De La Clergerie",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th Conference on International Language Resources and Evaluation (LREC-2006)",
"volume": "",
"issue": "",
"pages": "1348--1351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beno\u00eet Sagot, Lionel Cl\u00e9ment, \u00c9ric Villemonte de La Clergerie, and Pierre Boullier. 2006. The Lefff 2 syntactic lexicon for French: architecture, acquisition, use. In Proceedings of the 5th Conference on Inter- national Language Resources and Evaluation (LREC- 2006), pages 1348-1351, Genoa.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Continuous space language models",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Speech and Language",
"volume": "21",
"issue": "3",
"pages": "492--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk. 2007. Continuous space language mod- els. Computer Speech and Language, 21(3):492-518.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [],
"year": 2001,
"venue": "Computational linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to corefer- ence resolution of noun phrases. Computational lin- guistics, 27(4):521-544.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Applying backpropagation networks to anaphor resolution",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Stuckardt",
"suffix": ""
}
],
"year": 2007,
"venue": "Anaphora: Analysis, Algorithms and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Stuckardt. 2007. Applying backpropagation net- works to anaphor resolution. In Ant\u00f3nio Branco, editor, Anaphora: Analysis, Algorithms and Applications. 6th",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using Czech-English parallel corpora in automatic identification of it",
"authors": [],
"year": 2012,
"venue": "Discourse Anaphora and Anaphor Resolution Colloquium, DAARC 2007, number 4410 in Lecture Notes in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "112--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Discourse Anaphora and Anaphor Resolution Collo- quium, DAARC 2007, number 4410 in Lecture Notes in Artificial Intelligence, pages 107-124, Berlin. Kate\u0159ina Veselovsk\u00e1, Ngu . y Giang Linh, and Michal Nov\u00e1k. 2012. Using Czech-English parallel corpora in automatic identification of it. In Proceedings of the 5th Workshop on Building and Using Comparable Corpora, pages 112-120, Istanbul, Turkey.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Task setup",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Antecedent feature aggregation aligned tokens.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Neural network with latent anaphora resolution which can generate multiple features: -Anaphora mention type -Gender match -Number match -String match -Alias feature (Soon et al., 2001) -Appositive position feature (Soon et al., 2001) -Semantic class (Soon et al., 2001) -Semantic class match -Binary distance feature -Antecedent is first mention in sentence",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Maximum entropy classifier results</td></tr><tr><td/><td>TED</td><td/><td/><td/><td colspan=\"2\">News commentary</td></tr><tr><td/><td colspan=\"2\">(Accuracy: 0.700)</td><td/><td/><td colspan=\"2\">(Accuracy: 0.576)</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td/><td>P</td><td>R</td><td>F</td></tr><tr><td>ce</td><td colspan=\"3\">0.634 0.747 0.686</td><td>ce</td><td colspan=\"2\">0.477 0.344 0.400</td></tr><tr><td>elle</td><td colspan=\"3\">0.756 0.617 0.679</td><td>elle</td><td colspan=\"2\">0.498 0.401 0.444</td></tr><tr><td>elles</td><td colspan=\"3\">0.679 0.319 0.434</td><td>elles</td><td colspan=\"2\">0.565 0.116 0.193</td></tr><tr><td>il</td><td colspan=\"3\">0.719 0.591 0.649</td><td>il</td><td colspan=\"2\">0.655 0.626 0.640</td></tr><tr><td>ils</td><td colspan=\"3\">0.663 0.940 0.778</td><td>ils</td><td colspan=\"2\">0.570 0.834 0.677</td></tr><tr><td colspan=\"4\">OTHER 0.743 0.678 0.709</td><td colspan=\"3\">OTHER 0.567 0.573 0.570</td></tr></table>"
},
"TABREF3": {
"text": "Neural network classifier with anaphoras resolved by BART",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"6\">: Neural network classifier with latent anaphora resolution</td></tr><tr><td/><td>TED</td><td/><td/><td/><td colspan=\"2\">News commentary</td></tr><tr><td/><td colspan=\"2\">(Accuracy: 0.713)</td><td/><td/><td colspan=\"2\">(Accuracy: 0.626)</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td/><td>P</td><td>R</td><td>F</td></tr><tr><td>ce</td><td colspan=\"3\">0.611 0.723 0.662</td><td>ce</td><td colspan=\"2\">0.492 0.324 0.391</td></tr><tr><td>elle</td><td colspan=\"3\">0.749 0.596 0.664</td><td>elle</td><td colspan=\"2\">0.526 0.439 0.478</td></tr><tr><td>elles</td><td colspan=\"3\">0.602 0.616 0.609</td><td>elles</td><td colspan=\"2\">0.547 0.558 0.552</td></tr><tr><td>il</td><td colspan=\"3\">0.733 0.638 0.682</td><td>il</td><td colspan=\"2\">0.599 0.757 0.669</td></tr><tr><td>ils</td><td colspan=\"3\">0.710 0.884 0.788</td><td>ils</td><td colspan=\"2\">0.671 0.878 0.761</td></tr><tr><td colspan=\"4\">OTHER 0.760 0.704 0.731</td><td colspan=\"3\">OTHER 0.681 0.526 0.594</td></tr></table>"
},
"TABREF6": {
"text": "Final classifier results",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}