|
{ |
|
"paper_id": "D16-1017", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:34:29.340517Z" |
|
}, |
|
"title": "Event participant modelling with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ottokar", |
|
"middle": [], |
|
"last": "Tilk", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tallinn University of Technology", |
|
"location": { |
|
"postCode": "12618", |
|
"settlement": "Tallinn", |
|
"country": "Estonia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"postCode": "66123", |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Asad", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"postCode": "66123", |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"postCode": "66123", |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Saarland University", |
|
"location": { |
|
"postCode": "66123", |
|
"settlement": "Saarbr\u00fccken", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all. To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically rolelabelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.", |
|
"pdf_parse": { |
|
"paper_id": "D16-1017", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "A common problem in cognitive modelling is lack of access to accurate broad-coverage models of event-level surprisal. As shown in, e.g., Bicknell et al. (2010), event-level knowledge does affect human expectations for verbal arguments. For example, the model should be able to predict that mechanics are likely to check tires, while journalists are more likely to check typos. Similarly, we would like to predict what locations are likely for playing football or playing flute in order to estimate the surprisal of actually-encountered locations. Furthermore, such a model can be used to provide a probability distribution over fillers for a thematic role which is not mentioned in the text at all. To this end, we train two neural network models (an incremental one and a non-incremental one) on large amounts of automatically rolelabelled text. Our models are probabilistic and can handle several roles at once, which also enables them to learn interactions between different role fillers. Evaluation shows a drastic improvement over current state-of-the-art systems on modelling human thematic fit judgements, and we demonstrate via a sentence similarity task that the system learns highly useful embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Our goals in this paper are to learn a representation of events and their thematic roles based on large quantities of automatically role-labelled text and to be able to calculate probability distributions over the possible role fillers of specific missing roles. In this sense, the task is closely related to work on selectional preference acquisition (Van de Cruys, 2014). We focus here on the roles agent, patient, location, time, manner and the predicate itself. The model we develop is trained to represent the eventrelevant context and hence systematically captures long-range dependencies. This has been previously shown to be beneficial also for more general language modelling tasks (e.g., Chelba and Jelinek, 1998; Tan et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 698, |
|
"end": 723, |
|
"text": "Chelba and Jelinek, 1998;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 741, |
|
"text": "Tan et al., 2012)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This type of modelling is potentially relevant to a wide range of tasks, for instance for performing thematic fit judgment tasks, detecting anomalous events (Dasigi and Hovy, 2014) , or predicting event structure that is not explicitly present in the text. The latter could be useful for inferring missing information in entailment tasks or improving identification of thematic roles outside the sentence containing the predicate. Potential applications also include predicate prediction based on arguments and roles, which has been noted to be relevant for simultaneous machine translation for a verb-final to a verb-medial source language (Grissom II et al., 2014) . Within cognitive modelling, our model could help to more accurately estimate semantic surprisal for broadcoverage texts, when used in combination with an incremental role labeller (e.g., Konstas and Keller, 2015) , or to provide surprisal estimates for content words as a control variable for psycholinguistic experimental materials.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 180, |
|
"text": "(Dasigi and Hovy, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 666, |
|
"text": "(Grissom II et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 881, |
|
"text": "Konstas and Keller, 2015)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we focus on the predictability of verbs and nouns, and we suggest that the predictability of these words depends to a large extent on the relationship of these words to other nouns and verbs, especially those connected via the same event. We choose a neural network (NN) model because we found that results from existing related models, e.g. Baroni and Lenci' s Distributional Memory, depend heavily on how exactly the distributional space is defined, while having no principled way of optimizing the space. A crucial advantage of a neural network-based approach is thus that the model can be trained to optimize the distributional representation for the task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 373, |
|
"text": "Baroni and Lenci'", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our model is trained specifically to predict missing semantic role-fillers based on the predicate and other available role-fillers of that predicate. The model can also predict the predicate based on the semantic roles and their fillers. In our model, there is no difference in how the semantic roles or the predicate are treated. Thus, when we refer here to roles, we usually mean both semantic roles and the predicate, unless otherwise explicitly stated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our model is compositional in that it has access to several role-fillers (including the verb) at the same time, and can thus represent interdependencies between participants of an event and predict from a combined representation. Consider, for example, the predicate serve, whose likely patients include e.g., drinks. If we had the agent robber, we would like to be able to predict a patient like sentence, in the sense of \"the robber will serve his sentence. . . \" This task is related to modelling thematic fit. In this paper, we evaluate our model on a variety of thematic fit rating datasets as well as on a sentence similarity dataset that tests for successful compositionality in our model's representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper makes the following contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We compare two novel NN models for generating a probability distribution over selectional preferences given one or more roles and fillers. \u2022 We show that our technique outperforms state of the art thematic fit models on many datasets. \u2022 We show that the embeddings thus obtained are effective in measuring sentence similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Neural networks have proven themselves to be very well suited for language modeling. By learning distributed representations of words (Bengio et al., 2003) , they are able to generalize to new contexts that were not observed word-by-word in the training corpus. They can also use a relatively large number of context words in order to make predictions about the upcoming word. In fact, the recurrent neural network (RNN) LM (Mikolov et al., 2010) does not explicitly fix the context size at all but is potentially able to compress the relevant information about the entire context in its recurrent layer. These are the properties that we would like to see in our role-filler prediction model as well. Neural networks have also been used for selectional preference acquisition, as in Van de Cruys (2014). His selectional preference model differs from our model in several aspects. First, unlike our model it is limited to a fixed number of inputs. Another difference is that his model uses separate embeddings for all input words, while ours enables partial parameter sharing. Finally and crucially for rolefiller prediction, selectional preference models score the inputs, while our model gives a probability distribution over all words for the queried target role.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 155, |
|
"text": "(Bengio et al., 2003)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 446, |
|
"text": "(Mikolov et al., 2010)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural networks", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "We discuss the components necessary for our model in more detail in section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural networks", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Our source of training data is the ukWaC corpus, which is part of the WaCky project, as well as the British National Corpus. The corpus consists of web pages crawled from the .uk web domain, containing approximately 138 million sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data source", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "These sentences were run through a semantic role labeller and head words were extracted as described in Sayeed et al. (2015) . The semantic role labeller used, SENNA (Collobert and Weston, 2007) , generates PropBank-style role labels. While PropBank argument positions (ARG0, ARG1, etc.) are primarily designed to be verb-specific, rather than directly representing \"classical\" thematic roles (agent, patient, etc.), in the majority of cases, ARG0 lines up with agent roles and ARG1 lines up with patient roles. PropBank-style roles have been used in other recent efforts in thematic fit modelling (e.g., Baroni et al., 2014; Vandekerckhove et al., 2009) , For processing purposes, the corpus was divided into 3500 segments. Fourteen segments (approx 500 thousand sentences) each were used for development and testing, and the rest were used for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 124, |
|
"text": "Sayeed et al. (2015)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 194, |
|
"text": "(Collobert and Weston, 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 625, |
|
"text": "Baroni et al., 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 654, |
|
"text": "Vandekerckhove et al., 2009)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data source", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to construct our incremental model and compare it to n-gram language models, we needed a precise mapping between the lemmatized argument words and their positions in the original sentence. This required aligning the SENNA tokenization and the original ukWaC tokenization used for Malt-Parser. Because of the heterogeneous nature of web data, this alignment was not always achievable-we skipped a small number of sentences in this case. In the development and testing portions of the data set, we filtered sentences containing predicates where there were multiple role-assignees with the same role for the same predicate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data source", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our model is a neural network with a single nonlinear hidden layer and a Softmax output layer. All inputs are one-hot encoded-i.e., represented as a binary vector with size equal to the number of possible input values, where all entries are zero except the entry at the index corresponding to the current input value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model design and implementation", |
|
"sec_num": "3" |
|
}, |
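
{

"text": "To make the encoding concrete, here is a minimal sketch (our own illustration; the sizes are arbitrary) of the one-hot encoding described above:\n\nimport numpy as np\n\ndef one_hot(index, size):\n    # binary vector that is zero everywhere except at the index of the input value\n    v = np.zeros(size)\n    v[index] = 1.0\n    return v\n\n# e.g., word id 3 in a 10-word vocabulary and role id 2 among 7 roles\nw = one_hot(3, 10)\nr = one_hot(2, 7)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model design and implementation",

"sec_num": "3"

},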
|
{ |
|
"text": "The parameters of a neural network classifier with a single hidden layer and one-hot encoded inputs can be viewed as serving two distinct purposes: moving from inputs towards outputs, the first weight matrix that we encounter is responsible for learning distributed representations (or embeddings) of the inputs; the second weight matrix represents the parameters of a maximum entropy classifier that uses the learned embeddings as inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Considering the task of role-filler prediction, we would want these two sets of parameters to have the following properties:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 The classifier layer should be different for each target role, because the suitable filler given the context can clearly be very different depending on the role (e.g., verb vs. agent). \u2022 The embedding layer should also be different depending on the role of context word. Otherwise, the network would not have any information about the role of the context word. For example, the suitable verb filler for context word dog in an agent role is probably very different from what it would be, were it in a patient role (e.g. bark vs. feed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We now briefly describe some incrementally improved intermediate approaches that we also considered as they help to understand the steps that led to our final solution for achieving the desired properties of the embedding and classifier layer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A naive way to accomplish the aspired properties would be to have a separate model for each input role and target role pair. This approach has several drawbacks. For a start, there is no obvious way to model interactions of different input roles and fillers in order to make predictions based on multiple input role-word pairs simultaneously. Another problem is that the parameters are trained only on a fraction of available training data-e.g., verb embedding weights are trained independently for each target role classifier. Finally, given that we have chosen to distinguish between n different roles, it would require us to train and tune hyper-parameters for n 2 models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One of these problems (data under-utilization) can be alleviated by sharing role-specific embedding and classifier weights across different models. For example, the verb embedding matrix would be shared across all models that predict different role fillers based on input verbs. Other problems remain, and training the large number of models becomes even more difficult because of parameter synchronization, but this is a step towards the next improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Shared role-specific embedding and classifier weights enable us to combine all input-target role pair models into a single model. This can be done by stacking role-specific embedding matrices to form a 3-way embedding tensor and building a classifier parameter tensor analogously. Having a single model saves us from tuning multiple models and makes modelling interactions between inputs possible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Despite these advantages, having two tensors in our model has a drawback of rapidly growing the number of parameters as vocabulary size, number of roles, and hidden layer size increase. This may lead to over-fitting and increases training time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A more subtle weakness is the fact that this kind of model lacks parameter sharing across rolespecific embedding weight matrices. It is clear that some characteristics of words (e.g., semantics) usu-ally remain the same across different roles. Thus it is practical to share some information across rolespecific weights so that the embeddings can benefit from more data and learn better semantic representations while leaving room for role-specific traits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For these reasons we replace the tensors with their factored form in our models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two-part view of the model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Factoring classifier and embedding tensors helps to alleviate both the efficiency and parameter sharing problems brought out in Section 3.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Given vocabulary size |V |, number of roles |R| and hidden layer size H, each tensor T would require |V | \u00d7 |R| \u00d7 H parameters. The number of parameters can be reduced by expressing the tensor as a sum of F rank-one tensors (Hitchcock, 1927) . This technique enables us to replace the tensor T with three factor matrices A, B and C. Each tensor element T [i, j, k] can then be written as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 241, |
|
"text": "(Hitchcock, 1927)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "T [i, j, k] = F f =1 A[i, f ]B[j, f ]C[f, k]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
|
{ |
|
"text": "Assuming lateral slices of T represent role-specific weight matrices (index j denotes roles), we write each role specific weight matrix W as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W = A diag(rB)C (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where r is a one-hot encoded role vector and diag is a function that returns a square matrix with the argument vector on the main diagonal and zeros elsewhere. For example, with a vocabulary of 50000 words, 7 roles and number of factors and hidden units equal to 512, the factorization reduces the number of parameters from 179M to 26M and greatly improves training speed. Factorization also enables parameter sharing, since factor matrices A and C are shared across all roles. Factored tensors have been used in different neural network models before. Starting with restricted Boltzmann machines, Memisevic and Hinton (2010) used a factored 3-way interaction tensor in their image transformation model. Sutskever et al. (2011) created a character level RNN LM that was efficiently able to use input character specific recurrent weights by using a factored tensor. Alum\u00e4e (2013) used a factored tensor in a multi-domain LM to be able to use a domain-specific hidden layer weight matrix that would take into account the differences while exploiting similarities between domains. A multi-modal LM by Kiros et al. (2014) uses a factored tensor to change the effective output layer weights based on image features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 598, |
|
"end": 625, |
|
"text": "Memisevic and Hinton (2010)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 704, |
|
"end": 727, |
|
"text": "Sutskever et al. (2011)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 878, |
|
"text": "Alum\u00e4e (2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1098, |
|
"end": 1117, |
|
"text": "Kiros et al. (2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
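
{

"text": "As a sanity check on the parameter counts above, the following sketch (our own illustration, not the paper's code) computes the two sizes and builds one role-specific weight matrix from the factors as in Equation 2:\n\nimport numpy as np\n\nV, R, H, F = 50000, 7, 512, 512  # vocabulary, roles, hidden units, factors\n\nfull_tensor_params = V * R * H            # about 179M parameters for one full tensor\nfactored_params = V * F + R * F + F * H   # about 26M parameters for A, B and C\n\nA = np.random.randn(V, F) * 0.01\nB = np.random.randn(R, F) * 0.01\nC = np.random.randn(F, H) * 0.01\nr = np.zeros(R); r[0] = 1.0               # one-hot role vector\nW = A @ np.diag(r @ B) @ C                # role-specific slice; the full tensor is never built",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Factored parameter tensors",

"sec_num": "3.2"

},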
|
{ |
|
"text": "It has been noticed before, that training models with factored tensors as parameters using gradient descent is difficult (Sutskever et al., 2011; Kiros et al., 2014) . As explained by Sutskever et al. (2011) , this is caused by the fact that each tensor element is represented as a product of three parameters, which may cause disproportionate updates if these three factors have magnitudes that are too different. Another problem is that if the factor matrix B happens to have too small or too large values, then this might also cause instabilities in the lower layers as the back-propagated gradients are scaled by rolespecific row of B in our model. This situation is magnified in our models, since we have not one, but two factored layers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 145, |
|
"text": "(Sutskever et al., 2011;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 165, |
|
"text": "Kiros et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 207, |
|
"text": "Sutskever et al. (2011)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To solve this problem, Sutskever et al. 2011suggest using 2nd order methods instead of gradient descent. Alum\u00e4e (2013) has alleviated the problem of shrinking back-propagated gradients by adding a bias (initialized with ones) to the domain-specific factor vector. We found that using AdaGrad (Duchi et al., 2011) to update the parameters is very effective. The method provides parameter-specific learning rates that depend on the historic magnitudes of the gradients of these parameters. This seems to neutralize the effect of vanishing or exploding gradients by reducing the step size for parameters that tend to have large gradients and allow a bigger learning rate for parameters with smaller gradients.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 118, |
|
"text": "Alum\u00e4e (2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 312, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Factored parameter tensors", |
|
"sec_num": "3.2" |
|
}, |
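
{

"text": "A minimal sketch (our own illustration) of the per-parameter AdaGrad step referred to above, where the accumulated squared gradients shrink the step size of parameters with historically large gradients:\n\nimport numpy as np\n\ndef adagrad_update(param, grad, accum, lr=0.1, eps=1e-8):\n    # accumulate squared gradients, then scale the learning rate per parameter\n    accum += grad ** 2\n    param -= lr * grad / (np.sqrt(accum) + eps)\n    return param, accum",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Factored parameter tensors",

"sec_num": "3.2"

},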
|
{ |
|
"text": "Our general approach, common to both role-filler models, is shown in Figure 1 . First, role-specific word embedding vector e is computed by implicitly taking a fiber (word indexed row of a role indexed slice) from the factored embedding tensor: e = wA e diag(rB e )C e", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 77, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h = PReLU(e + b h )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where w and r are one-hot encoded word and role vectors respectively, b h is hidden layer bias, and A e , B e and C e represent the factor matrices that the embedding tensor is factored into. Next, we apply a parametric rectifier (PReLU; He et al., 2015) nonlinearity to the role-specific word embedding to obtain the hidden activation vector h.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The hidden layer activation vector h is fed to the Softmax output layer through a target role specific classifier weight matrix (a target role-indexed slice of the classifier parameter tensor):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c = hA c diag(tB c )C c (5) y = Softmax(c + b y )", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where t is a one-hot encoded target role vector, b y is output layer bias, and y is the output of the model representing the probability distribution over the output vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General structure of the model", |
|
"sec_num": "3.3" |
|
}, |
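
{

"text": "The forward pass of Equations 3-6 for a single input role-word pair can be sketched as follows (our own NumPy illustration; the actual models were implemented in Theano and trained on mini-batches):\n\nimport numpy as np\n\ndef prelu(x, a=0.25):\n    return np.where(x > 0, x, a * x)\n\ndef softmax(x):\n    e = np.exp(x - x.max())\n    return e / e.sum()\n\ndef forward(w, r, t, Ae, Be, Ce, Ac, Bc, Cc, b_h, b_y):\n    # Eq. 3-4: role-specific embedding and hidden activation\n    e = ((w @ Ae) * (r @ Be)) @ Ce\n    h = prelu(e + b_h)\n    # Eq. 5-6: target-role-specific classifier and output distribution\n    c = ((h @ Ac) * (t @ Bc)) @ Cc\n    return softmax(c + b_y)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "General structure of the model",

"sec_num": "3.3"

},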
|
{ |
|
"text": "The general approach described in Section 3.3 also allows us to model interactions between different input role-word pairs. If we know the order in which the inputs were introduced, then we can add a recurrent connection to the hidden layer to implement an incremental role filler predictor. When word order is unknown, then input role-word pair representations can be added together to compose the representation of the entire predicate context 1 . We chose addi- 1 In applications like natural language generation, for example, where role-fillers need to be predicted, it is not necessarily always the case that the order will be known in advance or that the thematic fit model will be used to generate the full sentence in correct word order. tion over concatenation (often preferred in language models) because the non-incremental model does not need to preserve information about word order, and addition also enables using a variable number of inputs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 466, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The incremental model adds information about the previous hidden state h t\u22121 to the current input word role-specific embedding e t through recurrent weights W r . So, Equation 4 is replaced with:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "h t = PReLU(e t + h t\u22121 W r + p t W p + b h ) (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where p t is a binary predicate boundary indicator that informs the model about the start of a new predicate and equals 1 when the target word belongs to a new predicate and 0 otherwise. The predicate boundary input p t is connected to the network through parameter vector W p . The hidden state h 0 is initialized to zeros.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
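
{

"text": "A corresponding sketch (our own illustration) of the recurrent update in Equation 7:\n\nimport numpy as np\n\ndef prelu(x, a=0.25):\n    return np.where(x > 0, x, a * x)\n\ndef incremental_step(e_t, h_prev, p_t, W_r, W_p, b_h):\n    # p_t is 1 at the start of a new predicate and 0 otherwise (Eq. 7)\n    return prelu(e_t + h_prev @ W_r + p_t * W_p + b_h)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Modeling input interactions",

"sec_num": "3.4"

},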
|
{ |
|
"text": "The non-incremental model adds role-specific embedding vectors of all input words together to form the representation of the entire predicate context and replaces Equation 4 with:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h = PReLU( N i=1 e i + b h )", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where N is the number of input role-word pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling input interactions", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "First, we give details that are common to both the RNN and NN models. The models are trained with mini-batches of 128 samples. The hidden layer consists of 256 PReLU units; embedding and classifier tensor factorization layer sizes are 256 and 512 respectively. The input and output vocabularies are the same, consisting of 50,000 most frequent lemmatized words in the training corpus. The role vocabulary consists of 5 argument roles (ARG0, ARG1, ARGM-LOC, ARGM-TMP and ARGM-MNR), the verb is treated as the sixth role, and all the other roles are mapped to a shared OTHER label. Parameters are updated using AdaGrad (Duchi et al., 2011 ) with a learning rate of 0.1. All models are implemented using Theano (Bastien et al., 2012; Bergstra et al., 2010) and trained on GPUs for 8 days. RNN model gradients are computed using backpropagation through time (Rumelhart et al., 1986) over 3 time steps. The NN model is trained on minibatches of 128 samples that are randomly drawn with replacement from the training set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 617, |
|
"end": 636, |
|
"text": "(Duchi et al., 2011", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 730, |
|
"text": "(Bastien et al., 2012;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 731, |
|
"end": 753, |
|
"text": "Bergstra et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 878, |
|
"text": "(Rumelhart et al., 1986)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training details", |
|
"sec_num": "3.5" |
|
}, |
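
{

"text": "For reference, the stated hyper-parameters can be collected in one place (our own summary of the values above; the role label strings are illustrative):\n\nconfig = {\n    'batch_size': 128,\n    'hidden_units': 256,               # PReLU units\n    'embedding_factors': 256,\n    'classifier_factors': 512,\n    'vocabulary_size': 50000,          # shared input and output vocabulary\n    'roles': ['ARG0', 'ARG1', 'ARGM-LOC', 'ARGM-TMP', 'ARGM-MNR', 'V', 'OTHER'],\n    'optimizer': 'AdaGrad',\n    'learning_rate': 0.1,\n    'bptt_steps': 3,                   # RNN model only\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training details",

"sec_num": "3.5"

},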
|
{ |
|
"text": "Perplexity allows us to compare all our models in similar terms, and evaluate the extent to which access to thematic roles helps the model to predict missing role fillers. For comparability, the perplexities of all models are computed only on content word probabilities (i.e., predicates and their arguments). We also report the 95% confidence interval for perplexity, which is computed according to Klakow and Peters (2002) . All models are trained on exactly the same sentences of lemmatized words. Probability mass is distributed across the vocabulary of the 50,000 most frequent content words in the training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 424, |
|
"text": "Klakow and Peters (2002)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model comparison", |
|
"sec_num": "3.6" |
|
}, |
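
{

"text": "A minimal sketch (our own illustration) of the content-word perplexity used for this comparison, computed from the probabilities a model assigns to the observed predicates and arguments:\n\nimport math\n\ndef content_word_perplexity(probs):\n    # probs: model probabilities of the observed content words only\n    log_sum = sum(math.log(p) for p in probs)\n    return math.exp(-log_sum / len(probs))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Model comparison",

"sec_num": "3.6"

},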
|
{ |
|
"text": "First, we compare our model to a conventional 3gram language model 3-gram LM, conditioning on the previous context containing the immediately preceding context of content and function words. All n-grams are discounted with Kneser-Ney smoothing, and n-gram probability estimates are interpolated with lower order estimates. Sentence onset in all models is padded with a special sentence onset tag. The vocabulary of context words for this model consists of all words from the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.6.1" |
|
}, |
|
{ |
|
"text": "As a second model, we train a 3-gram content word model 3-gram CWM, which is an N -gram LM that is trained only on content words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.6.1" |
|
}, |
|
{ |
|
"text": "Next, we have RNN CWM-an RNN LM (Mikolov et al., 2010) trained on content words only. The context size of this model is not explicitly defined and the model can potentially utilize more context words than 3-gram CWM (even from outside the sentence boundary).", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 54, |
|
"text": "(Mikolov et al., 2010)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.6.1" |
|
}, |
|
{ |
|
"text": "Our incremental role-filler RNN RF is similar to RNN CWM, except for using role-specific embedding and classifier weights (slices of factored tensor). It thus has additional information about the content word roles 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.6.1" |
|
}, |
|
{ |
|
"text": "Finally, the non-incremental role-filler NN RF loses the information about word order and the ability to use information outside predicate boundaries and trades it for the ability to see the future (i.e., the context includes both the preceding and the following content words and their roles).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.6.1" |
|
}, |
|
{ |
|
"text": "The results of content word perplexity evaluation are summarized in Table 1 . The thematicrole informed models outperform all other models by a very large margin, cutting perplexity almost in half. The incremental model achieves a slightly lower perplexity than the non-incremental one (237.8 vs. 241.9), hinting that the content word order and out-of-predicate role-word pairs can be even more informative than a preview of upcoming role-word pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 75, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.6.2" |
|
}, |
|
{ |
|
"text": "The difference between normal LM and the CWM can be explained by the loss of information from function words, combined with additional sparsity in the model because content word sequences are much sparser than sequences of content and function words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.6.2" |
|
}, |
|
{ |
|
"text": "This also explains why using a neural networkbased RNN CWM model improves the performance so much (perplexity drops from 834.9 to 473.2), as neural network based language models are well known for their ability to generalize well to unseen contexts by learning distributed representations of words (Bengio et al., 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 319, |
|
"text": "(Bengio et al., 2003)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.6.2" |
|
}, |
|
{ |
|
"text": "In order to see whether our model accurately represents events and their typical thematic role fillers, we evaluate our model on a range of existing datasets containing human thematic fit ratings. This evaluation also allows us to compare our model to existing models that have been used on this task. Table 2 : Thematic fit evaluation scores, consisting of Spearman's \u03c1 correlations between average human judgements and model output, with numbers of missing values (due to missing vocabulary entries) in brackets. The baseline scores come from the TypeDM (Baroni and Lenci, 2010) model, further developed and evaluated in Greenberg et al. (2015a,b) and the neural network predict model described in Baroni et al. (2014) . NN RF is the non-incremental model presented in this article. Our model maps ARG2 in Pado to OTHER role. Significances were calculated using paired two-tailed significance tests for correlations (Steiger, 1980) . NN RF was significantly better than both of the other models on the Greenberg and Ferretti location datasets and significantly better than BL2010 but not GSD2015 on McRae and Pado+McRae+Ferretti; differences were not statistically significant for Pado and Ferretti instruments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 556, |
|
"end": 580, |
|
"text": "(Baroni and Lenci, 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 649, |
|
"text": "Greenberg et al. (2015a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 700, |
|
"end": 720, |
|
"text": "Baroni et al. (2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 933, |
|
"text": "(Steiger, 1980)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation on thematic fit ratings", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "State-of-the-art computational models of thematic fit quantify the similarity between a role filler of a verb and the proto-typical filler for that role for the verb based on distributional vector space models. For example, the thematic fit of grass as a patient for the verb eat would be determined by the cosine of a distributional vector representation of grass and a prototypical patient of eat. The proto-typical patient is in turn obtained from averaging representations of words that typically occur as a patient of eat (e.g., Erk, 2007; Baroni and Lenci, 2010; Sayeed and Demberg, 2014; Greenberg et al., 2015b) . For more than one role, information from both the agent and the predicate can be used to jointly to predict a patient (e.g., Lenci, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 544, |
|
"text": "Erk, 2007;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 568, |
|
"text": "Baroni and Lenci, 2010;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 594, |
|
"text": "Sayeed and Demberg, 2014;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 619, |
|
"text": "Greenberg et al., 2015b)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 759, |
|
"text": "Lenci, 2011)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Previous studies obtained thematic fit ratings from humans by asking experimental participants to rate how common, plausible, typical, or appropriate some test role-fillers are for given verbs on a scale from 1 (least plausible) to 7 (most plausible) (McRae et al., 1998; Ferretti et al., 2001; Binder et al., 2001; Pad\u00f3, 2007; Pad\u00f3 et al., 2009; Vandekerckhove et al., 2009; Greenberg et al., 2015a) . The datasets include agent, patient, location and instrument roles. For example, in the Pad\u00f3 et al. (2009) dataset, the noun sound has a very low rating of 1.1 as the subject of hear and a very high rating of 6.8 as the object of hear. Each of the verb-role-noun triples was rated by several humans, and our evalua-tions are done against the average human score. The datasets differ from one another in size (as shown in Table 2 ), choice of verb-noun pairs, and in how exactly the question was asked of human raters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "(McRae et al., 1998;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 294, |
|
"text": "Ferretti et al., 2001;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 315, |
|
"text": "Binder et al., 2001;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 327, |
|
"text": "Pad\u00f3, 2007;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 346, |
|
"text": "Pad\u00f3 et al., 2009;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 375, |
|
"text": "Vandekerckhove et al., 2009;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 400, |
|
"text": "Greenberg et al., 2015a)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 509, |
|
"text": "Pad\u00f3 et al. (2009)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 824, |
|
"end": 831, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "A major difference between what the state-of-theart models do and what our model does is that our model distributes a probability mass of one across the vocabulary, while the thematic fit models have no such overall constraint; they will assign a high number to all words that are similar to the prototypical vector, without having to distribute probability mass. Specifically, this implies that two synonymous fillers, one of which is a frequent word like fire, and the other of which is an infrequent word, e.g., blaze, will get similar ratings by the distributional similarity models, but quite different ratings by the neural network model, as the more frequent word will have higher probability. Greenberg et al. (2015a) showed that human ratings are insensitive to noun frequency. Hence, we report results that adjust for frequency effects by setting the output layer bias of the neural network model to zero. Since the output unit biases of the neural network model are independent from the inputs, they correlate strongly (r s = 0.74, p = 0.0) with training corpus word frequencies after being trained. Therefore, setting the learned output layer bias vector to a zero-vector is a simple way to reduce the effect of word frequencies on the model's output probability distribution. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 701, |
|
"end": 725, |
|
"text": "Greenberg et al. (2015a)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "4.3" |
|
}, |
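
{

"text": "A small sketch (our own illustration; the model object and its fields are hypothetical) of this frequency adjustment, zeroing the learned output bias before reading off the probability of a candidate filler:\n\nimport numpy as np\n\ndef thematic_fit_score(model, context_words, context_roles, target_role, filler_id):\n    # hypothetical helper: probability of the candidate filler with the output bias removed\n    saved_bias = model.b_y.copy()\n    model.b_y = np.zeros_like(model.b_y)\n    probs = model.predict(context_words, context_roles, target_role)\n    model.b_y = saved_bias\n    return probs[filler_id]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods",

"sec_num": "4.3"

},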
|
{ |
|
"text": "We can see that the neural network model outperforms the baselines on all the datasets except the Pado dataset. An error analysis on the role filler probabilities generated by the neural net points to the effect of level of constraint of the verb on the estimates. For a relatively non-constraining verb, the neural net model will have to distribute the probability mass across many different suitable fillers, while the semantic similarity models do not suffer from this. This implies that filler fit is not directly comparable across verbs in the NN model (only filler predictability is comparable). Per role results are shown in Table 3 . Surprisingly, the model output has the highest correlation with the averaged human judgements for the target role ARG2, despite the fact that ARG2 is mapped to OTHER along with several other roles. The model struggles the most when it comes to predicting fillers for ARG0. There is no noticeable correlation between the role-specific performance and the role occurrence frequency in the samples of our training set. This implies that parameter sharing between roles does indeed help when it comes to balancing the performance between rare and ubiquitous roles as discussed in section 3.1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 632, |
|
"end": 639, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The above thematic role fit data sets only assess the fit between two words. Our model can however also model the interaction between different roles; see Figure 2 for an example of model predictions. We are only aware of one small dataset that can be used to systematically test the effectiveness of the compositionality for this task. The Bicknell et al. (2010) spelling vs. mechanic check spelling and journalist check tires vs. mechanic check tires together with human congruity judgments. The goal in this task is for the model to reproduce the human judgments on the 64 sentence pairs. Lenci (2011), which we compare against in Table 4 , proposed a first compositional model based on TypeDM to evaluate on this task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 363, |
|
"text": "Bicknell et al. (2010)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 163, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 634, |
|
"end": 642, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Compositionality", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We use two accuracy scores for the evaluation, which we call \"Accuracy 1\" and \"Accuracy 2\". \"Accuracy 1\" counts a hit iff the model assigns the composed subject-verb combination a higher score when we test a human-rated better-fitting object in contrast with when we test a worse-fitting one; in other words, a hit is achieved when journalist check spelling should be better than journalist check tires, if we give the model journalist check as the predicate to test against different objects. (The result from Lenci for this task was transmitted by private communication.) \"Accuracy 2\" counts a hit iff, given an object, the composed subject-verb combination gives a higher score when the subject is better fitting. That is, a hit is achieved when journalist check spelling has a higher score than mechanic check spelling, setting the query to the model as journalist check and mechanic check and finding a score for spelling in that context. This accuracy metric is proposed and evaluated in Lenci (2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 994, |
|
"end": 1006, |
|
"text": "Lenci (2011)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality", |
|
"sec_num": "4.5" |
|
}, |
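
{

"text": "The two metrics can be restated as a short sketch (our own restatement; score(subject, verb, obj) stands for the probability the model assigns to the object given the composed subject-verb context):\n\ndef accuracy_1(items, score):\n    # items: (subject, verb, better_object, worse_object) tuples\n    hits = sum(score(s, v, good) > score(s, v, bad) for s, v, good, bad in items)\n    return hits / len(items)\n\ndef accuracy_2(items, score):\n    # items: (better_subject, worse_subject, verb, object) tuples\n    hits = sum(score(good, v, o) > score(bad, v, o) for good, bad, v, o in items)\n    return hits / len(items)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Compositionality",

"sec_num": "4.5"

},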
|
{ |
|
"text": "Evaluation shows that our model performs similarly to that of Lenci, although only limited conclusions can be drawn due to the small data set size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "To show that our model learns to represent input words and their roles in a useful way that reflects the meaning and interactions between inputs, we evaluate our non-incremental model on a sentence similarity task from Grefenstette and Sadrzadeh (2015) . We assign similarity scores to sentence pairs by computing representations for each sentence by tak- ing the hidden layer state (Equation 8) of the nonincremental model given the words in the sentence and their corresponding roles. Sentence similarity is then rated with the cosine similarity between the representations of the two sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 252, |
|
"text": "Grefenstette and Sadrzadeh (2015)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of event representations: sentence similarity", |
|
"sec_num": "5" |
|
}, |
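
{

"text": "A minimal sketch (our own illustration) of the similarity computation, where repr_a and repr_b stand for the hidden layer states of Equation 8 computed for the two sentences:\n\nimport numpy as np\n\ndef cosine(u, v):\n    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef sentence_similarity(repr_a, repr_b):\n    # rate similarity as the cosine between the two composed event representations\n    return cosine(repr_a, repr_b)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation of event representations: sentence similarity",

"sec_num": "5"

},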
|
{ |
|
"text": "Spearman's rank correlation between the cosine similarities produced by our model and human ratings are shown in Table 5 . Our model achieves much higher correlation with human ratings than the best result reported by Grefenstette and Sadrzadeh (2015) , showing our model's ability to compose meaningful representations of multiple input words and their roles.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 251, |
|
"text": "Grefenstette and Sadrzadeh (2015)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of event representations: sentence similarity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We also compare our model with another NN word representation model baseline that does not embed role information; by this comparison, we can determine the size of the improvement brought by our role-specific embeddings. The baseline sentence representations are constructed by elementwise addition of pre-trained word2vec (Mikolov et al., 2013) word embeddings 3 . Scores are again computed by using cosine similarity. The large gap between our model's and word2vec baseline's performance illustrates the importance of embedding role information in word representations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 345, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of event representations: sentence similarity", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we proposed two neural network architectures for learning proto-typical event representa-3 https://code.google.com/p/word2vec/ # ratings NN RF Kronecker W2V Humans 199 0.34 0.26 0.13 0.62 tions. These models were trained to generate probability distributions over role fillers for a given semantic role. In our perplexity evaluation, we demonstrated that giving the model access to thematic role information substantially improved prediction performance. We also compared the performance of our model to the performance of current state-of-theart models in predicting human thematic fit ratings and showed that our model outperforms the existing models by a large margin. Finally, we also showed that the event representations from the hidden layer of our model are highly effective in a sentence similarity task. In future work, we intend to test the potential contribution of this model when applied to larger tasks such as entailment and inference tasks as well as semantic surprisal-based prediction tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A reviewer kindly points out, as a matter of historical interest, that the high-level architecture of the RNN RF model bears some resemblance to the parallel distributed processing model inMcClelland et al. (1989) andSt. John and McClelland (1990).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by the German Research Foundation (DFG) as part of SFB 1102: \"Information Density and Linguistic Encoding\" as well as the Cluster of Excellence \"Multimodal Computing and Interaction\" (MMCI). Also, the authors wish to thank the anonymous reviewers whose valuable ideas contributed to this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Multi-domain neural network language model", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Alum\u00e4e", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "INTERSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2182--2186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alum\u00e4e, T. (2013). Multi-domain neural network language model. In INTERSPEECH, pages 2182- 2186. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "238--247", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics, volume 1, pages 238-247.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Distributional memory: A general framework for corpus-based semantics", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Comput. Linguist", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "673--721", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baroni, M. and Lenci, A. (2010). Distributional memory: A general framework for corpus-based semantics. Comput. Linguist., 36(4):673-721.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Bastien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lamblin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bergstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bergeron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Bouchard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I. J., Bergeron, A., Bouchard, N., and Bengio, Y. (2012). Theano: new features and speed improvements. Deep Learning and Unsu- pervised Feature Learning NIPS 2012 Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A neural probabilistic language model", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ducharme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janvin", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1137--1155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bengio, Y., Ducharme, R., Vincent, P., and Jan- vin, C. (2003). A neural probabilistic language model. The Journal of Machine Learning Re- search, 3:1137-1155.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Theano: a CPU and GPU math expression compiler", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bergstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Breuleux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Bastien", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lamblin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Desjardins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Warde-Farley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Python for Scientific Computing Conference (SciPy)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde- Farley, D., and Bengio, Y. (2010). Theano: a CPU and GPU math expression compiler. In Proceed- ings of the Python for Scientific Computing Con- ference (SciPy). Oral Presentation.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Effects of event knowledge in processing verbal arguments", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Bicknell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Mcrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kutas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "63", |
|
"issue": "4", |
|
"pages": "489--505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bicknell, K., Elman, J. L., Hare, M., McRae, K., and Kutas, M. (2010). Effects of event knowledge in processing verbal arguments. Journal of Memory and Language, 63(4):489-505.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The effects of thematic fit and discourse context on syntactic ambiguity resolution", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Binder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Duffy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "44", |
|
"issue": "2", |
|
"pages": "297--324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Binder, K. S., Duffy, S. A., and Rayner, K. (2001). The effects of thematic fit and discourse context on syntactic ambiguity resolution. Journal of Memory and Language, 44(2):297-324.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Exploiting syntactic structure for language modeling", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "225--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chelba, C. and Jelinek, F. (1998). Exploiting syntactic structure for language modeling. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 225-231. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Fast semantic extraction using a novel neural network architecture", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "560--567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collobert, R. and Weston, J. (2007). Fast semantic extraction using a novel neural network architec- ture. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 560-567, Prague, Czech Republic. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Modeling newswire events using neural networks for anomaly detection", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Dasigi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1414--1422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dasigi, P. and Hovy, E. H. (2014). Model- ing newswire events using neural networks for anomaly detection. In COLING, pages 1414- 1422.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adaptive subgradient methods for online learning and stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2121--2159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duchi, J., Hazan, E., and Singer, Y. (2011). Adap- tive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A simple, similarity-based model for selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erk, K. (2007). A simple, similarity-based model for selectional preferences. In Proceedings of the 45th Annual Meeting of the Association of Com- putational Linguistics, pages 216-223, Prague, Czech Republic. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Integrating verbs, situation schemas, and thematic role concepts", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Ferretti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Mcrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hatherell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "44", |
|
"issue": "4", |
|
"pages": "516--547", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferretti, T. R., McRae, K., and Hatherell, A. (2001). Integrating verbs, situation schemas, and thematic role concepts. Journal of Memory and Language, 44(4):516-547.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Verb polysemy and frequency effects in thematic fit modeling", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Greenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greenberg, C., Demberg, V., and Sayeed, A. (2015a). Verb polysemy and frequency effects in thematic fit modeling. In Proceedings of the 6th Workshop on Cognitive Modeling and Com- putational Linguistics, pages 48-57, Denver, Col- orado. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Greenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greenberg, C., Sayeed, A., and Demberg, V. (2015b). Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Concrete models and empirical evaluations for the categorical compositional distributional model of meaning", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grefenstette, E. and Sadrzadeh, M. (2015). Concrete models and empirical evaluations for the categor- ical compositional distributional model of mean- ing. Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Grissom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Morgan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1342--1352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grissom II, A. C., Boyd-Graber, J., He, H., Morgan, J., and Daum\u00e9 III, H. (2014). Don't until the final verb wait: Reinforcement learning for simultane- ous machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 1342- 1352.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1502.01852" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delv- ing deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The expression of a tensor or a polyadic as a sum of products", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hitchcock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1927, |
|
"venue": "Journal of Mathematics and Physics", |
|
"volume": "", |
|
"issue": "6", |
|
"pages": "164--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hitchcock, F. L. (1927). The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, (6):164-189.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Multimodal neural language models", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "595--603", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Multimodal neural language models. In Proceed- ings of the 31st International Conference on Ma- chine Learning (ICML-14), pages 595-603.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Testing the correlation of word error rate and perplexity", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Speech Communication", |
|
"volume": "38", |
|
"issue": "1", |
|
"pages": "19--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klakow, D. and Peters, J. (2002). Testing the cor- relation of word error rate and perplexity. Speech Communication, 38(1):19-28.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Semantic role labeling improves incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1191--1201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstas, I. and Keller, F. (2015). Semantic role labeling improves incremental parsing. In Pro- ceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1191-1201, Beijing, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Composing and updating verb argument expectations: A distributional semantic model", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2Nd Workshop on Cognitive Modeling and Computational Linguistics, CMCL '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lenci, A. (2011). Composing and updating verb ar- gument expectations: A distributional semantic model. In Proceedings of the 2Nd Workshop on Cognitive Modeling and Computational Linguis- tics, CMCL '11, pages 58-66, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Sentence comprehension: A parallel distributed processing approach. Language and cognitive processes", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "St", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Taraban", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "287--335", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McClelland, J. L., St. John, M., and Taraban, R. (1989). Sentence comprehension: A parallel dis- tributed processing approach. Language and cog- nitive processes, 4(3-4):SI287-SI335.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Mcrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Spivey-Knowlton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tanenhaus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "38", |
|
"issue": "3", |
|
"pages": "283--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McRae, K., Spivey-Knowlton, M. J., and Tanen- haus, M. K. (1998). Modeling the influence of thematic fit (and other constraints) in on-line sen- tence comprehension. Journal of Memory and Language, 38(3):283-312.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Learning to represent spatial transformations with factored higher-order Boltzmann machines", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Memisevic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Neural Computation", |
|
"volume": "22", |
|
"issue": "6", |
|
"pages": "1473--1492", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Memisevic, R. and Hinton, G. E. (2010). Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Com- putation, 22(6):1473-1492.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Recurrent neural network based language model", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Karafi\u00e1t", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Burget", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Cernock\u1ef3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "INTER-SPEECH 2010, 11th Annual Conference of the International Speech Communication Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1045--1048", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikolov, T., Karafi\u00e1t, M., Burget, L., Cernock\u1ef3, J., and Khudanpur, S. (2010). Recurrent neu- ral network based language model. In INTER- SPEECH 2010, 11th Annual Conference of the In- ternational Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045-1048.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing sys- tems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The integration of syntax and semantic plausibility in a wide-coverage model of human sentence processing", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pad\u00f3, U. (2007). The integration of syntax and se- mantic plausibility in a wide-coverage model of human sentence processing. PhD thesis, Saarland University.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A probabilistic model of semantic plausibility in sentence processing", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Crocker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Cognitive Science", |
|
"volume": "33", |
|
"issue": "5", |
|
"pages": "794--838", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pad\u00f3, U., Crocker, M. W., and Keller, F. (2009). A probabilistic model of semantic plausibil- ity in sentence processing. Cognitive Science, 33(5):794-838.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Learning representations by backpropagating errors", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Rumelhart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "NATURE", |
|
"volume": "323", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back- propagating errors. NATURE, 323:9.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Combining unsupervised syntactic and semantic models of thematic fit", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the first Italian Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sayeed, A. and Demberg, V. (2014). Combining unsupervised syntactic and semantic models of thematic fit. In Proceedings of the first Italian Conference on Computational Linguistics (CLiC- it 2014).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "An exploration of semantic features in an unsupervised thematic fit evaluation framework", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Shkadzko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Emerging Topics at the First Italian Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "25--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sayeed, A., Demberg, V., and Shkadzko, P. (2015). An exploration of semantic features in an unsu- pervised thematic fit evaluation framework. In IJ- CoL vol. 1, n. 1 december 2015: Emerging Topics at the First Italian Conference on Computational Linguistics, pages 25-40. Accademia University Press.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Learning and applying contextual constraints in sentence comprehension", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "St", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mcclelland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Artificial Intelligence", |
|
"volume": "46", |
|
"issue": "1-2", |
|
"pages": "217--257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "St. John, M. F. and McClelland, J. L. (1990). Learning and applying contextual constraints in sentence comprehension. Artificial Intelligence, 46(1-2):217-257.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Tests for comparing elements of a correlation matrix", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Steiger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Psychological Bulletin", |
|
"volume": "87", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steiger, J. H. (1980). Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87(2):245.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Generating text with recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Martens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1017--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sutskever, I., Martens, J., and Hinton, G. E. (2011). Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017- 1024.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "A scalable distributed syntactic, semantic, and lexical language model", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics", |
|
"volume": "38", |
|
"issue": "3", |
|
"pages": "631--671", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tan, M., Zhou, W., Zheng, L., and Wang, S. (2012). A scalable distributed syntactic, semantic, and lexical language model. Computational Linguis- tics, 38(3):631-671.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "A neural network approach to selectional preference acquisition", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Van De Cruys", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Van de Cruys, T. (2014). A neural network approach to selectional preference acquisition. In Proceed- ings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 26-35.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "A robust and extensible exemplar-based model of thematic fit", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Vandekerckhove", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sandra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "EACL 2009, 12th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "826--834", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vandekerckhove, B., Sandra, D., and Daelemans, W. (2009). A robust and extensible exemplar-based model of thematic fit. In EACL 2009, 12th Con- ference of the European Chapter of the Associa- tion for Computational Linguistics, Proceedings of the Conference, Athens, Greece, March 30 - April 3, 2009, pages 826-834.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "General structure of role-filler models.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Examples of model predictions for the verb serve with different agents and target roles patient and location.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Per role thematic-fit evaluation scores in terms of Spearmans \u03c1 correlations between average human judgements and model output." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>NN RF Lenci 2011 0.671 Accuracy 1 0.687 Model Accuracy 2 0.828 0.844</td></tr></table>", |
|
"html": null, |
|
"text": "dataset contains triples like journalist check" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Accuracies on the Bicknell evaluation task." |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Sentence similarity evaluation scores on GS2013 dataset (Grefenstette and Sadrzadeh, 2015), consisting of Spearman's \u03c1 correlations between human judgements and model output. Kronecker is the best performing model from Grefenstette and Sadrzadeh (2015). NN RF is the non-incremental model presented in this article, and W2V is the word2vec baseline. Human performance (inter-annotator agreement) shows the upper bound." |
|
} |
|
} |
|
} |
|
} |