{
"paper_id": "E17-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:50:59.722497Z"
},
"title": "When is multitask learning effective? Semantic sequence prediction under varying data conditions",
"authors": [
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Mart\u00ednez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "INRIA",
"location": {
"country": "France"
}
},
"email": "[email protected]"
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to datadependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"pdf_parse": {
"paper_id": "E17-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to datadependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The recent success of recurrent neural networks (RNNs) for sequence prediction has raised a great deal of interest, which has lead researchers to propose competing architectures for several language-processing tasks. These architectures often rely on multitask learning (Caruana, 1997) .",
"cite_spans": [
{
"start": 270,
"end": 285,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multitask learning (MTL) has been applied with success to a variety of sequence-prediction tasks including chunking and tagging (Collobert et al., 2011; Bjerva et al., 2016; Plank, 2016) , name error detection (Cheng et al., 2015) and machine translation (Luong et al., 2016) . However, little is known about MTL for tasks which are more semantic in nature, i.e., tasks that aim at labeling some aspect of the meaning of words (Cruse, 1986) , instead their morphosyntactic behavior. In fact, results on semantic tasks are either mixed (Collobert et al., 2011) or, due to the file drawer bias (Rosenthal, 1979) , simply not reported. There is no prior study-to the best of our knowledge-that compares datadependent conditions with performance measures to shed some light on when MTL works for semantic sequence prediction. Besides any variation in annotation and conceptualization, the label distributions of such semantic tasks tends to be very different to the characteristic distributions expected in more frequently studied morphosyntactic tasks such as POS-tagging.",
"cite_spans": [
{
"start": 128,
"end": 152,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 153,
"end": 173,
"text": "Bjerva et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 174,
"end": 186,
"text": "Plank, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 210,
"end": 230,
"text": "(Cheng et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 255,
"end": 275,
"text": "(Luong et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 427,
"end": 440,
"text": "(Cruse, 1986)",
"ref_id": "BIBREF11"
},
{
"start": 535,
"end": 559,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 592,
"end": 609,
"text": "(Rosenthal, 1979)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contribution of this work is an evaluation of MTL on semantic sequence prediction on data-dependent conditions. We derive characteristics of datasets that make them favorable for MTL, by comparing performance with information-theoretical metrics of the label frequency distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use an off-the-shelf state-of-the-art architecture based on bidirectional Long-Short Term Memory (LSTM) models (Section 3) and evaluate its behavior on a motivated set of main and auxiliary tasks. We gauge the performance of the MTL setup (Section 4) in the following ways: i) we experiment with different combinations of main and auxiliary tasks, using semantic tasks as main task and morphosyntactic tasks as auxiliary tasks; ii) we apply FREQBIN, a frequency-based auxiliary task (see Section 2.5) to a series of languageprocessing tasks and evaluate its contribution, and iii) for POS we experiment with different data sources to control for label inventory size and corpus source for the auxiliary task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From our empirical study we observe the MTL architecture's sensitivity to label distribution properties, and its preference for compact, mid-entropy distributions. Additionally, we provide a novel parametric refinement of the FREQBIN auxiliary task that is more robust. In broader terms, we expect to motivate more thorough analysis of the performance of neural networks in MTL setups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multitask learning systems are often designed with the intention of improving a main task by incorporating joint learning of one or more related auxiliary tasks. For example, training a MTL model for the main task of chunking and treating part-of-speech tagging (POS) as auxiliary task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing multi-task learning",
"sec_num": "2"
},
{
"text": "The working principle of multitask learning is to improve generalization performance by leveraging training signal contained in related tasks (Caruana, 1997) . This is typically done by training a single neural network for multiple tasks jointly, using a representation that is shared across tasks. The most common form of MTL is the inclusion of one output layer per additional task, keeping all hidden layers common to all tasks. Task-specific output layers are customarily placed at the outermost layer level of the network.",
"cite_spans": [
{
"start": 142,
"end": 157,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing multi-task learning",
"sec_num": "2"
},
{
"text": "In the next section, we depict all main and auxiliary tasks considered in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing multi-task learning",
"sec_num": "2"
},
{
"text": "We use the following main tasks, aimed to represent a variety of semantic sequence labeling tasks. FRAMES: We use the FrameNet 1.5 (Baker et al., 1998) annotated corpus for a joint frame detection and frame identification tasks where a word can receive a predicate label like Arson or Personal success. We use the data splits from Hermann et al., 2014) . While frame identification is normally treated as single classification, we keep the sequence-prediction paradigm so all main tasks rely on the same architecture. SUPERSENSES: We use the supersense version of SemCor (Miller et al., 1993) from (Ciaramita and Altun, 2006) , with coarse-grained semantic labels like noun.person or verb.change. NER: The CONLL2003 shared-task data for named entity recognition for labels Person, Loc, etc. (Tjong Kim Sang and De Meulder, 2003) .",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 331,
"end": 352,
"text": "Hermann et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 571,
"end": 592,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF29"
},
{
"start": 598,
"end": 625,
"text": "(Ciaramita and Altun, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 791,
"end": 828,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main tasks",
"sec_num": "2.1"
},
{
"text": "We have used the EurWordNet list of ontological types for senses (Vossen et al., 1998) to convert the SUPERSENSES into coarser semantic traits like Animate or UnboundedEvent. 1 MPQA: The Multi-Perspective Question Answering (MPQA) corpus (Deng and Wiebe, 2015) , which contains sentiment information among others. We use the annotation corresponding to the coarse level of annotation, with labels like attitude and direct-speech-event.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Vossen et al., 1998)",
"ref_id": "BIBREF38"
},
{
"start": 238,
"end": 260,
"text": "(Deng and Wiebe, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SEMTRAITS:",
"sec_num": null
},
{
"text": "We have chosen auxiliary tasks that represent the usual features based on frequency and morphosyntax used for prediction of semantic labels. We collectively refer to them as lower-level tasks. CHUNK: The CONLL2003 shared-task data for noun-and verb-phrase chunking (Tjong Kim Sang and De Meulder, 2003) . DEPREL: The dependency labels for the English Universal Dependencies v1.3 (Nivre et al., 2016) . FREQBIN: The log frequency of each word, treated as a discrete label, cf. Section 2.5. POS: The part-of-speech tags for the Universal Dependencies v1.3 English treebank. Table 1 lists the datasets used in this paper, both to train main tasks and auxiliary tasks. For each dataset we list the following metrics: number of sentences, number of tokens, token-type ratio (TTR), the size of the label inventory counting Blabels and I-labels as different (|Y |), and the proportion of out-of-span labels, which we refer to as O labels.",
"cite_spans": [
{
"start": 265,
"end": 302,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF37"
},
{
"start": 379,
"end": 399,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 572,
"end": 579,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Auxiliary tasks",
"sec_num": "2.2"
},
{
"text": "The table also provides some of the information-theoretical measures we describe in Section 2.4. Note that DEPRELS and POS are the only datasets without any O labels, while FRAMES and SEMTRAITS are the two tasks with O labels but no B/I-span notation, as tokens are annotated individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data properties",
"sec_num": "2.3"
},
{
"text": "In order to quantify the properties of the different label distributions, we calculate three informationtheoretical quantities based on two metrics, kurtosis and entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic measures",
"sec_num": "2.4"
},
{
"text": "Entropy is the best-known informationtheoretical metric. It indicates the amount of uncertainty in a distribution. We calculate two variants of entropy, one taking all labels in consideration H(Y f ull ), and another one H(Y \u2212O ) where we discard the O label and only measure the entropy for the named labels, such as frame names in FRAMES. The entropy of the label distribution H(Y f ull ) is always lower than the entropy for the distribution disregarding the O label H(Y \u2212O ). This difference is a consequence Table 1 : Datasets for main tasks (above) and auxiliary tasks (below) with their number of sentences, tokens, type-token ratio, size of label inventory, proportion of O labels, kurtosis of the label distribution, entropy of the label distribution, and entropy of the label distribution without the O label.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Information-theoretic measures",
"sec_num": "2.4"
},
{
"text": "of the O-label being often the majority class in span-annotated datasets. The only exception is CHUNK, where O-tokens make up 14% of the total, and the full-distribution entropy is higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic measures",
"sec_num": "2.4"
},
{
"text": "Kurtosis indicates the skewness of a distribution and provides a complementary perspective to the one given by entropy. The kurtosis of the label distribution describes its tailedness, or lack thereof. The kurtosis for a normal distribution is 3, and higher kurtosis values indicate very tailed distributions, while lower kurtosis values indicate distributions with fewer outliers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic measures",
"sec_num": "2.4"
},
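{
"text": "To make these measures concrete, the following sketch shows how the Table 1-style statistics could be computed from a flat list of labels. This is our own illustration, not the authors' code; in particular, we assume kurtosis is taken over the label frequency counts, using the Pearson convention in which a normal distribution has kurtosis 3.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Sketch (not the authors' code) of the Table 1 label-distribution statistics:
# entropy with and without the O label, and kurtosis of the label counts.
# Assumes kurtosis is computed over the label frequency counts (our reading).
import math
from collections import Counter

from scipy.stats import kurtosis  # fisher=False -> normal distribution has kurtosis 3


def entropy(labels):
    """Shannon entropy (bits) of the empirical label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())


def label_stats(labels):
    counts = Counter(labels)
    return {
        "|Y|": len(counts),
        "%O": counts.get("O", 0) / len(labels),
        "H(Y_full)": entropy(labels),
        "H(Y_-O)": entropy([l for l in labels if l != "O"]),
        "kurtosis": kurtosis(list(counts.values()), fisher=False),
    }


if __name__ == "__main__":
    toy = ["O", "O", "B-PER", "I-PER", "O", "B-LOC", "O", "O"]
    print(label_stats(toy))
```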
{
"text": "For instance, we can see that larger inventory sizes yield more heavy-tailed distributions, e.g. FRAMES presents a lot of outliers and has the highest kurtosis. The very low value for POS indicates a distribution that, although Zipfian, has very few outliers as a result of the small label set. In contrast, DEPRELS, coming from the same corpus, has about three times as many labels, yielding a distribution that has fewer mid-values while still being less than 3. Nevertheless, the entropy values of POS and DEPRELS are similar, so kurtosis provides a complementary perspective on the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic measures",
"sec_num": "2.4"
},
{
"text": "Recently, a simple auxiliary task has been proposed with success for POS tagging: predicting the log frequency of a token . The intuition behind this model is that the auxiliary loss, predicting word frequency, helps differentiate rare and common words, thus providing better predictions for frequency-sensitive labels. They refer to this auxiliary task as FREQBIN, however, focus on POS only. used the discretized log frequency of the current word to build the FREQBIN auxiliary task to aid POS tagging, with good results. This auxiliary task aids the prediction of the main task (POS) in about half the languages, and improves the prediction of out of vocabulary words. Therefore, it is compelling to assess the possible contribution of FREQBIN for other tasks, as it can be easily calculated from the same training data as the main task, and requires no external resources or annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FREQBIN variants",
"sec_num": "2.5"
},
{
"text": "We experiment with three different variants of FREQBIN, namely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FREQBIN variants",
"sec_num": "2.5"
},
{
"text": "1. SKEWED 10 : The original formulation of a = int(log 10 (f reqtrain(w)), where a is the frequency label of the word w. Words not in the training data are treated as hapaxes. 2. SKEWED 5 : A variant using 5 as logarithm base, namely a = int(log 5 (f reqtrain(w)), aimed at providing more label resolution, e.g. for the NER data, SKEWED 10 yields 4 different labels, and SKEWED 5 yields 6. 3. UNIFORM: Instead of binning log frequencies, we take the index of the k-quantilized cumulative frequency for a word w. We use this parametric version of FREQBIN with the median number of labels produced by the previous variants to examine the importance of the label distribution being skewed. For k=5, this variant maximizes the entropy of a FREQBIN five-label distribution. Note that this method still places all hapaxes and outof-vocabulary words of the test data in the same frequency bin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FREQBIN variants",
"sec_num": "2.5"
},
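{
"text": "The sketch below shows one way the three FREQBIN variants could be computed; it is our reconstruction of the description above, not the released code. In particular, the UNIFORM binner groups words by their training frequency, so all hapaxes (and out-of-vocabulary test words, treated as hapaxes) share one bin while bins carry roughly equal token mass.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Our reconstruction (not the released code) of the FREQBIN label functions.
import math
from collections import Counter


def skewed_binner(train_tokens, base=10):
    """SKEWED_base: a = int(log_base(freq_train(w))); unseen words count as hapaxes."""
    freq = Counter(train_tokens)
    return lambda w: int(math.log(freq.get(w, 1), base))


def uniform_binner(train_tokens, k=5):
    """UNIFORM: index of the k-quantile of the cumulative token frequency.
    All words with the same training frequency share a bin, so hapaxes and
    OOV test words end up together; bins carry roughly equal token mass."""
    freq = Counter(train_tokens)
    total = sum(freq.values())
    mass_per_freq = Counter()
    for f in freq.values():
        mass_per_freq[f] += f          # token mass contributed by each frequency value
    cum, freq2bin = 0, {}
    for f in sorted(mass_per_freq):    # lowest frequency (hapaxes) first
        cum += mass_per_freq[f]
        freq2bin[f] = min(int(k * cum / total), k - 1)
    return lambda w: freq2bin.get(freq.get(w, 1), 0)


tokens = "the cat sat on the mat the end".split()
print(skewed_binner(tokens)("the"), uniform_binner(tokens, k=5)("cat"))
```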
{
"text": "Even though we could have used a reference corpus to have the same FREQBIN for all the data, we prefer to use the main-task corpus for FRE-QBIN. Using an external corpus would otherwise lead to a semisupervised learning scenario which is out of the scope of our work. Moreover, in us-ing only the input corpus to calculate frequency we replicate the setup of Plank et al. (2016) more closely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "FREQBIN variants",
"sec_num": "2.5"
},
{
"text": "Recurrent neural networks (RNNs) (Elman, 1990; Graves and Schmidhuber, 2005) allow the computation of fixed-size vector representations for word sequences of arbitrary length. An RNN is a function that reads in n vectors x 1 , ..., x n and produces a vector h n , that depends on the entire sequence x 1 , ..., x n . The vector h n is then fed as an input to some classifier, or higher-level RNNs in stacked/hierarchical models. The entire network is trained jointly such that the hidden representation captures the important information from the sequence for the prediction task.",
"cite_spans": [
{
"start": 33,
"end": 46,
"text": "(Elman, 1990;",
"ref_id": "BIBREF15"
},
{
"start": 47,
"end": 76,
"text": "Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "A bi-directional recurrent neural network (Graves and Schmidhuber, 2005) is an extension of an RNN that reads the input sequence twice, from left to right and right to left, and the encodings are concatenated. An LSTM (Long Short-Term Memory) is an extension of an RNN with more stable gradients (Hochreiter and Schmidhuber, 1997) . Bi-LSTM have recently successfully been used for a variety of tasks (Collobert et al., 2011; Huang et al., 2015; Kiperwasser and Goldberg, 2016; Liu et al., 2015; . For further details, cf. Goldberg (2015) and Cho (2015) .",
"cite_spans": [
{
"start": 42,
"end": 72,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 296,
"end": 330,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 401,
"end": 425,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 426,
"end": 445,
"text": "Huang et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 446,
"end": 477,
"text": "Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 478,
"end": 495,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 523,
"end": 538,
"text": "Goldberg (2015)",
"ref_id": "BIBREF16"
},
{
"start": 543,
"end": 553,
"text": "Cho (2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We use an off-the-shelf bidirectional LSTM model ). 2 The model is illustrated in Figure 1 . It is a context bi-LSTM taking as input word embeddings w. Character embeddings c are incorporated via a hierarchical bi-LSTM using a sequence bi-LSTM at the lower level . The character representation is concatenated with the (learned) word embeddings w to form the input to the context bi-LSTM at the upper layers. For hyperparameter settings, see Section 3.1.",
"cite_spans": [
{
"start": 52,
"end": 53,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "The stacked bi-LSTMs represent the shared layers between tasks. We here use three stacked (h=3) bi-LSTMs for the upper layer, and a single layer bi-LSTM at the lower level for the character representations. Following Collobert et al. (2011) , at the outermost (h = 3) layer separate output layers for the single tasks are added using a Figure 1 : Multi-task bi-LSTM. The input to the model are word w and character embeddings c (from the lower bi-LSTM). The model is a stacked 3-layer bi-LSTM with separate output layers for the main task (solid line) and auxiliary tasks (dashed line; only one auxiliary task shown in the illustration). softmax. We additionally experiment with predicting lower-level tasks at inner layers, i.e., predicting POS at h = 1, while the main task at h = 3, the outermost layer, following . During training, we randomly sample a task and instance, and backpropagate the loss of the current instance through the shared deep network. In this way, we learn a joint model for main and auxiliary task(s).",
"cite_spans": [
{
"start": 217,
"end": 240,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
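{
"text": "As a rough illustration of the architecture just described, the sketch below shows a multi-task bi-LSTM with a character-level bi-LSTM at the lower level, three stacked shared context bi-LSTMs, and one softmax output layer per task attached at a configurable layer. The paper's own implementation is the pycnn-based bilstm-aux code (footnote 2); this PyTorch sketch and all names in it are ours.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Schematic PyTorch sketch (ours, not the pycnn-based bilstm-aux code) of the
# multi-task bi-LSTM: char bi-LSTM at the lower level, 3 stacked shared context
# bi-LSTMs, and one softmax output layer per task at a configurable layer h.
import torch
import torch.nn as nn


class MultiTaskBiLSTM(nn.Module):
    def __init__(self, n_words, n_chars, task_sizes, task_layers,
                 w_dim=64, c_dim=100, hidden=100, depth=3):
        super().__init__()
        self.w_emb = nn.Embedding(n_words, w_dim)
        self.c_emb = nn.Embedding(n_chars, c_dim)
        # lower-level bi-LSTM over the characters of each word
        self.char_lstm = nn.LSTM(c_dim, c_dim // 2, bidirectional=True, batch_first=True)
        # stacked context bi-LSTMs shared across all tasks
        layers, in_dim = [], w_dim + c_dim
        for _ in range(depth):
            layers.append(nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True))
            in_dim = 2 * hidden
        self.ctx = nn.ModuleList(layers)
        # one output layer per task; task_layers[t] in {1..depth} picks where task t is predicted
        self.heads = nn.ModuleDict({t: nn.Linear(2 * hidden, n) for t, n in task_sizes.items()})
        self.task_layers = task_layers

    def forward(self, words, chars, task):
        # chars: (seq_len, max_word_len); final char bi-LSTM states form the char representation
        _, (h, _) = self.char_lstm(self.c_emb(chars))
        char_repr = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)        # (1, seq_len, c_dim)
        x = torch.cat([self.w_emb(words).unsqueeze(0), char_repr], dim=-1)
        for layer_idx, lstm in enumerate(self.ctx, start=1):
            x, _ = lstm(x)
            if layer_idx == self.task_layers[task]:
                break
        return self.heads[task](x.squeeze(0))                           # (seq_len, n_labels)

# Training sketch: sample a task and a sentence at random, compute the
# cross-entropy loss of that task's head, and backpropagate through the
# shared layers, as described in Section 3.
```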
{
"text": "All the experiments in this article use the same bi-LSTM architecture described in Section 3. We train the bi-LSTM model with default parameters, i.e., SGD with cross-entropy loss, no minibatches, 30 epochs, default learning rate (0.1), 64 dimensions for word embeddings, 100 for character embeddings, 100 hidden states, random initialization for the embeddings, Gaussian noise with \u03c3=0.2. We use a fixed random seed set upfront to facilitate replicability. The only hyperparameter we further examine is the number of epochs, which is set to 30 unless otherwise specified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "3.1"
},
{
"text": "We follow the approach of Collobert et al. (2011) in that we do not use any task-specific features beyond word and character information, nor do we use pre-trained word embeddings for initialisation or more advanced optimization techniques. 3 While any of these changes would likely improve the performance of the systems, the goal of our experiments is to delimit the behavior of the bi-LSTM architecture and the interaction between main and auxiliary task(s).",
"cite_spans": [
{
"start": 241,
"end": 242,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "3.1"
},
{
"text": "A system in our experiments is defined by a main task and up to two auxiliary tasks, plus a choice of output layers (at which layer to predict the auxiliary task, i.e., h \u2208{1,2,3}). For each main task, we ran the following systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Overview",
"sec_num": "3.2"
},
{
"text": "1. Baseline, without any auxiliary task. 2. One additional system for each auxiliary task, say DEPREL. 3. A combination of each of the three versions of FREQBIN, namely SKEWED 5 ,SKEWED 10 and UNIFORM, and each of the other auxiliary tasks, such as DEPREL+UNIFORM. The total combination of systems for all five main tasks is 1440.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Overview",
"sec_num": "3.2"
},
{
"text": "This section describes the results of both experimental scenarios, namely the benchmarking of FREQBIN as an auxiliary task, and the combinations of semantic main task with low-level auxiliary tasks, including an analysis of the data properties. The different tasks in our experiments typically use different evaluation metrics, however we evaluate all tasks on micro-averaged F1 without the O class, which we consider the most informative overall. We do not use the O-label's F1 score because it takes recall into consideration, and it is deceptively high for the majority class. We test for significance with a 10K-iteration bootstrap sample test, and p < .05.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
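{
"text": "A minimal sketch of the evaluation just described is given below: token-level micro-averaged F1 that ignores the O class, plus a paired bootstrap significance test with 10K resamples. The paper does not spell out the exact resampling unit, so the token-level version here is an assumption of ours, not the authors' evaluation script.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Sketch (our helper, not the paper's evaluation script): micro-F1 without the
# O class and a 10K-iteration paired bootstrap test (token-level resampling
# assumed, since the paper does not specify the resampling unit).
import random


def micro_f1_no_o(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and p != g)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0


def bootstrap_p(gold, pred_a, pred_b, iters=10_000):
    """Approximate p-value for 'system B beats system A' under resampling."""
    n, not_better = len(gold), 0
    for _ in range(iters):
        idx = [random.randrange(n) for _ in range(n)]
        g = [gold[i] for i in idx]
        a = [pred_a[i] for i in idx]
        b = [pred_b[i] for i in idx]
        if micro_f1_no_o(g, b) <= micro_f1_no_o(g, a):
            not_better += 1
    return not_better / iters
```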
{
"text": "This section presents the results for the prediction of the main semantic tasks described in Section 2. Given the size of the space of possible task combinations for MTL, we only report the baseline and the results of the best system. Table 2 presents the results for all main semantic tasks, comparing the results of the best system with the baseline. The last column indicates the amount of systems that beat the baseline for a given certain main task. Having fixed the variant of FREQBIN to UNIFORM (see Section 4.2), and the number of epochs to 30 (see below) on development data, the total amount of systems for any main task is 22.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
{
"text": "Out of the two main tasks over the baseline only SEMTRAITS is significantly better over BL. SEM-TRAITS has a small label set, so the system is able to learn shared parameters for the label combinations of main and aux without suffering from too Table 2 : Baseline (BL) and best system performance difference (\u2206) for all main tasksimprovements in bold, significant improvements underlined-plus number of systems over baseline for each main task. much sparsity. Compare with the dramatic loss of the already low-performing FRAMES, which has the highest kurtosis caused by the very long tail of low-frequency labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
{
"text": "We have expected CHUNK to aid SUPER-SENSES, but in spite of our expectations, other low-level tasks do not aid in general the prediction of high-level task. What is otherwise an informative feature for a semantic task in single-task learning does not necessarily lend itself as an equally useful auxiliary task for MTL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
{
"text": "For a complementary evaluation, we have also measured the precision of the O label. However, precision score is also high, above 90, for all tasks except the apparently very difficult MPQA (70.41 for the baseline). All reported systems degrade around 0.50 points with regards to the baseline, except SUPERSENSES which improves slightly form 96.27 to 96.44. The high precision obtained for the also very difficult FRAMES tasks suggests that this architecture, while not suitable for frame disambiguation, can be used for frame-target identification. Disregarding FREQBIN, the only low-level tasks that seems to aid prediction is POS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
{
"text": "An interesting observation from the BIO task analysis is that while the standard bi-LSTM model used here does not have a Viterbi-style decoding like more complex systems (Ma and Hovy, 2016; Lample et al., 2016) , we have found very few invalid BIO sequences. For NER, there are only ten I-labels after an O-label, out of the 27K predicted by the bi-LSTM. For SUPERSENSES there are 59, out of 1,5K predicted I-labels.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Ma and Hovy, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 190,
"end": 210,
"text": "Lample et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
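{
"text": "The invalid-sequence statistic mentioned above can be checked with a few lines of code; the snippet below (ours, not the authors') counts predicted I- labels that immediately follow an O label or start a sentence.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Count invalid BIO transitions: an I- label right after O or at sentence start (our snippet).
def invalid_i_after_o(sentences):
    bad = 0
    for labels in sentences:                 # one list of predicted labels per sentence
        prev = "O"
        for cur in labels:
            if prev == "O" and cur.startswith("I-"):
                bad += 1
            prev = cur
    return bad

print(invalid_i_after_o([["O", "I-PER", "B-LOC", "I-LOC", "O"]]))  # -> 1
```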
{
"text": "The amount of invalid predicted sequences is lower than expected, indicating that an additional decoding layer plays a smaller role in prediction quality than label distribution and corpus size, e.g. NER is a large dataset with few labels, and the system has little difficulty in learning label precedences. For larger label sets or smaller data sizes, invalid sequence errors are bound to appear because of sparseness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main semantic tasks",
"sec_num": "4.1"
},
{
"text": "We observe no systematic tendency for an output layer to be a better choice, and the results of choosing the inneror outer-layer (h=1 vs h=3) input differ only minimally. However, both systems that include POS have a preference for the inner layer having higher performance, which is consistent with the results for POS in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of output layer choice",
"sec_num": null
},
{
"text": "Effect of the number of training epochs Besides all the data properties, the only hyperparameter that we examine further is the number of network training epochs. 4 All the results reported in this article have been obtained in a 30-epoch regime. However, we have also compared system performance with different numbers of epochs. Out of the values we have experimented (5,15,30,50) with, we recommend 30 iterations for this architecture. At 5 and 15 epochs, the performance does not reach the levels for 30 and is consistently worse for baselines and auxiliarytask systems. Moreover, the performance for 50 is systematically worse than for 30, which indicates overfitting at this point.",
"cite_spans": [
{
"start": 163,
"end": 164,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of output layer choice",
"sec_num": null
},
{
"text": "We have run all systems increasing the size of the main task training data in blocks of 25%, keeping the size of the auxiliary task constant. We do not observe improvements over baseline along the learning curve for any of the main tasks except MPQA and SEM-TRAITS. At smaller main task data sizes, the auxiliary task learning swamps the training of the main task. This results is consistent with the findings by Luong et al. (2016) . We leave the research on the effects auxiliary data size-and its size ratio with regards to the main task-for further work.",
"cite_spans": [
{
"start": 413,
"end": 432,
"text": "Luong et al. (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of training data size",
"sec_num": null
},
{
"text": "As follows from the results so far, the bi-LSTM will not benefit from auxiliary loss if there are many labels and entropy is too high. Auxiliary task level distribution also plays a role, as we will discuss in Section 4.3, FREQBIN-UNIFORM consistently outperforms the skewed measure with base 5 and 10. Therefore we have also measured the effect of using different sources of POS auxiliary data to give account for the possible differences in label inventory and corpus for all tasks, high and lowlevel, cf. Table 3 . The English UD treebank is distributed with Universal POS (UPOS), which we use throughout this article, and also with Penn Treebank (PTB) tags (Marcus et al., 1993) . We have used the PTB version of the English UD corpus (UD/PTB) as well as the training section of the Wall Street Journal (WSJ) treebank as of POS (WSJ/PTB) auxiliary task. The former offers the opportunity to change the POS inventory to the three times larger PTB inventory while using the same corpus.",
"cite_spans": [
{
"start": 661,
"end": 682,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 508,
"end": 515,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Auxiliary task contribution",
"sec_num": "4.2"
},
{
"text": "However, the characteristics of the UD/UPOS we have used as POS throughout the article makes it a more suitable auxiliary source, in fact it systematically outperforms the other two. We argue that UD/UPOS has enough linguistic signal to be a useful auxiliary task, while still depending on a smaller label inventory. Interestingly, if we use POS for CHUNK (cf. Table 3 ), note that even though the language in WSJ is closer to the language in the training corpora for CHUNK and NER, it is not the best auxiliary POS source for either task.",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Auxiliary task contribution",
"sec_num": "4.2"
},
{
"text": "We observe an improvement when using UD/PTB for POS, while using WSJ/PTB worsens the results for this task. We argue that this architecture benefits from the scenario where the same corpus is used to train with two different label sets for POS, whereas using a larger label set and a different corpus does not aid prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Auxiliary task contribution",
"sec_num": "4.2"
},
{
"text": "In this section we evaluate the interaction between all tasks and the FREQBIN auxiliary task. For this purpose, we treat all tasks (high-or low-level) as main task, and compare the performance of a single-task baseline run, with a task +FREQBIN setup. We have compared the three versions of FREQBIN (Section 2.5) but we only report UNI-FORM, which consistently outperforms the other two variants, according to our expectations. Table 4 lists all datasets with the size of their label inventory for reference (|Y |), as well as the absolute difference in performance between the FREQBIN-UNIFORM system and the baseline (\u2206). Systems that beat the baseline are marked in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analyzing FREQBIN",
"sec_num": "4.3"
},
{
"text": "Following , the FREQBIN system beats the baseline for the POS task. Moreover, it also aids the prediction for SEMTRAITS and MPQA. The better performance of these two systems indicates that this architecture is not necessarily only advisable for lower-level tasks, as long as the datasets have the right data properties. Table 4 : Label inventory size (|Y |), FREQBINbaseline absolute difference in performance (\u2206)improvements are in bold, significant improvements are underlined-and coefficient of determination for label-to-frequency regression (R 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analyzing FREQBIN",
"sec_num": "4.3"
},
{
"text": "The improvement of low-level classes is clear in the case of POS. We observe an improvement from 75 to 80 for the X label, mostly made up of low-frequency items. The similarly scattered label INTJ goes from 84 to 87. While no POS label drops in performance on +FREQBIN with regards to the baseline, all the other improvements are of 1 point of less.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing FREQBIN",
"sec_num": "4.3"
},
{
"text": "To supplement the benchmarking of FREQBIN, we estimate how much frequency information is contained in all the linguistic sequence annotations used in this article. We do so by evaluating the coefficient of determination (R 2 ) of a linear regression model to predict the log frequency of a word given its surrounding label trigram, which we use as a proxy for sequence prediction. For instance, for 'the happy child', it would attempt to predict the log-frequency of happy given the 'DET ADJ NOUN' POS trigram. Note that this model is delexicalized, and only uses task labels because its goal is to determine how much word-frequency information is contained in e.g. the POS sequence. A high R 2 indicates there is a high proportion of the variance of log frequency explained by the label trigram. We use linear regression implemented in sklearn with L2 regularization and report the average R 2 of 10-fold cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label-frequency co-informativeness",
"sec_num": "4.4"
},
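{
"text": "The delexicalized regression described above could be set up along the following lines; this is our sketch of the procedure (Ridge regression as the L2-regularized linear model, one-hot label trigrams via DictVectorizer), since the paper does not give the exact featurization.",
"note": "editorial illustration, not part of the original paper"
},
```python
# Sketch (ours) of the Section 4.4 label-to-frequency regression: predict a word's
# log frequency from its one-hot-encoded label trigram, Ridge (L2) regression,
# mean R^2 over 10-fold cross-validation.
import math
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score


def label_to_frequency_r2(tokens, labels):
    freq = Counter(tokens)
    feats, y = [], []
    for i, word in enumerate(tokens):
        feats.append({
            "prev": labels[i - 1] if i > 0 else "<S>",
            "cur": labels[i],
            "next": labels[i + 1] if i + 1 < len(labels) else "</S>",
        })
        y.append(math.log10(freq[word]))
    X = DictVectorizer().fit_transform(feats)
    return cross_val_score(Ridge(), X, y, scoring="r2", cv=10).mean()
```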
{
"text": "POS is the label set with the highest explanatory power over frequency, which is expectable: determiners, punctuations and prepositions are high-frequency word types, whereas hapaxes are more often closed-class words. DEPRELS sequences contain also plenty of frequency information. Three sequence tasks have similar scores under .50, namely CHUNK, SUPERSENSE and SEM-TRAITS. They all have in common that their O class is highly indicative of function words, an argument supported by their similar values of fulldistribution entropy. The one with the lowest score out of these three, namely SEMTRAITS is the one with the least grammatical information, as it does not contain part of speech-related labels. The (R2) is very low for the remaining tasks, and indeed, for FRAMENET it is a very small negative number which rounds up to zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label-frequency co-informativeness",
"sec_num": "4.4"
},
{
"text": "While the co-informativeness of FREQBIN with regards to its main task is a tempting explanation, it does not fully explain when it works as an auxiliary task. Indeed, the FREQBIN contribution at handling out-of-vocabulary words seems to only affect POS and SEMTRAITS, while it does not improve DEPRELS, which normally depends on syntactic trees for accurate prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label-frequency co-informativeness",
"sec_num": "4.4"
},
{
"text": "In this section we alter the network to study the effect of network width and character representations. Multitask learning allows easy sharing of parameters for different tasks. Part of the explanation for the success of multitask learning are related to net capacity (Caruana, 1997) . Enlarg-ing a network's hidden layers reduces generalization performance, as the network potentially learns dedicated parts of the hidden layer for different tasks. This means that the desirable trait of parameter sharing of MTL is lost. To test this property, we train a MTL network for all setups where we increase the size of the hidden layer by a factor k, where k is the number of auxiliary tasks.",
"cite_spans": [
{
"start": 269,
"end": 284,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Net capacity and contribution of character representation",
"sec_num": "5"
},
{
"text": "Our results confirm that increasing the size of the hidden layers reduces generalization performance. This is the case for all setups. None of the results is better than the best systems in Table 2 , and the effective number of systems that outperform the baseline are fewer (FRAMES 0, MPQA: 2, NER: 0, SEMTRAITS: 9, SUPERSENSES: 0).",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 197,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Net capacity and contribution of character representation",
"sec_num": "5"
},
{
"text": "Throughout the article we used the default network structure which includes a lower-level bi-LSTM at the character level. However, we hypothesize that the character features are not equally important for all tasks. In fact, if we disable the character features, making the system only depend on word information (cf. Table 5), we observe that two of the tasks (albeit the ones with the overall lowest performance) increase their performance in about 2.5 points, namely MPQA and FRAMES. For the other two tasks we observe drops up to a maximum of 8-points for NER. Character embeddings are informative for NER, because they approximate the well-known capitalization features in traditional models. Character features are not informative for tasks that are more dependent on word identity (like FRAMES), but are indeed useful for tasks where parts of the word can be informative, such as POS or NER. Table 5 : Comparison default hierarchical systems using a lower-level bi-LSTM for characters (BL w + c) versus system using only words (w).",
"cite_spans": [],
"ref_spans": [
{
"start": 898,
"end": 905,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Net capacity and contribution of character representation",
"sec_num": "5"
},
{
"text": "Multitask learning has been recently explored by a number of studies, including name error recog-nition (Cheng et al., 2015) , tagging and chunking (Collobert et al., 2011; , entity and relation extraction (Gupta et al., 2016) , machine translation (Luong et al., 2016) and machine translation quality estimation including modeling annotator bias (Cohn and Specia, 2013; Shah and Specia, 2016) . Most earlier work had in common that it assumed jointly labeled data (same corpus annotated with multiple labels). In contrast, in this paper we evaluate multitask training from distinct sources to address data paucity, like done recently (Kshirsagar et al., 2015; Braud et al., 2016; Plank, 2016) . Sutton et al. (2007) demonstrate improvements for POS tagging by training a joint CRF model for both POS tagging and noun-phrase chunking. However, it is not clear under what conditions multi-task learning works. In fact, Collobert et al. (2011) train a joint feedforward neural network for POS, chunks and NER, and observe only improvements in chunking (similar to our findings, cf. Section 4.2), however, did not investigate data properties of these tasks.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Cheng et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 148,
"end": 172,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 206,
"end": 226,
"text": "(Gupta et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 249,
"end": 269,
"text": "(Luong et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 347,
"end": 370,
"text": "(Cohn and Specia, 2013;",
"ref_id": "BIBREF9"
},
{
"start": 371,
"end": 393,
"text": "Shah and Specia, 2016)",
"ref_id": "BIBREF34"
},
{
"start": 635,
"end": 660,
"text": "(Kshirsagar et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 661,
"end": 680,
"text": "Braud et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 681,
"end": 693,
"text": "Plank, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 696,
"end": 716,
"text": "Sutton et al. (2007)",
"ref_id": "BIBREF36"
},
{
"start": 918,
"end": 941,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "To the best of our knowledge, this is the first extensive evaluation of the effect of data properties and main-auxiliary task interplay in MTL for semantic sequence tasks. The most related work is Luong et al. (2016) , who focus on the effect of auxiliary data size (constituency parsing) on the main task (machine translation), finding that large amounts of auxiliary data swamp the learning of the main task. Earlier work related to MTL is the study by Ando and Zhang (2005) who learn many auxiliary task from unlabeled data to aid morphosyntactic tasks.",
"cite_spans": [
{
"start": 197,
"end": 216,
"text": "Luong et al. (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We have examined the data-conditioned behavior of our MTL setup from three perspectives. First, we have tested three variants of FREQBIN showing that our novel parametric UNIFORM variant outperforms the previously used SKEWED 10 , which has a number of labels determined by the corpus size. Second, we examined main-auxiliary task combinations for five semantic tasks and up to two lower-level tasks. We observe that the best auxiliary task is either FREQBIN or FRE-QBIN+POS, which have low kurtosis and fairly high entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "We also explored three sources of POS data as auxiliary task, differing in corpus composition or label inventory. We observe that the UPOS variant is the most effective auxiliary task for the evaluated architecture. Indeed, UPOS has fewer labels, and also a more compact distribution with lower kurtosis than its PTB counterpart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "While we propose a better variant of FREQBIN (UNIFORM) we conclude that it is not a useful auxiliary task in the general case. Rather, it helps predict low-frequency labels in scenarios where the main task is already very co-informative of word frequency. While log frequency lends itself naturally to a continuous representation so that we could use regression to predict it instead of classification, doing so would require a change of the architecture and, most importantly, the joint loss. Moreover, discretized frequency distributions allow us to interpret them in terms of entropy. Thus, we leave it to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "When comparing system performance to data properties, we determine the architecture's preference for compact, mid-entropy distributions what are not very skewed, i.e., have low kurtosis. This preference explains why the system fares consistently well for a lot of POS experiments but falls short when used for task with many labels or with a very large O majority class. Regarding output layer choice, we have not found a systematic preference for inner or outer-layer predictions for an auxiliary task, as the results are often very close.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "We argue strongly that the difficulty of semantic sequence predictions can be addressed as a matter of data properties and not as the antagonic truism that morphosyntax is easy and semantics is hard. The underlying problems of semantic task prediction have often to do with the skewedness of the data, associated often to the preponderance of the O-class, and a possible detachment from mainly lexical prediction, such as the spans of MPQA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "This paper is only one step towards better understanding of MTL. It is necessarily incomplete, we hope to span more work in this direction. For instance, the system evaluated in this study has no Viterbi-style decoding for sequences. We hypothesize that such extension of the model would improve prediction of labels with strong interdependency, such as BIO-span labels, in particular for small datasets or large label inventories, albeit we found the current system predicting fewer invalid sequences than expected. In future, we would like to extend this work in several directions: comparing different MTL architectures, additional tasks, loss weighting, and comparing the change of performance between a label set used as an auxiliary task or as a-predicted-feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Available at: https://github.com/bplank/ multitasksemantics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at: https://github.com/bplank/ bilstm-aux",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example,AdamTrainer or MomentumSGDTrainer in pycnn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Number of epochs is among the most influential parameters of the system. Adding more layers did not further improve results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their feedback. Barbara Plank thanks the Center for Information Technology of the University of Groningen for the HPC cluster and Nvidia corporation for supporting her research. H\u00e9ctor Mart\u00ednez Alonso is funded by the French DGA project VerDi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A framework for learning predictive structures from multiple tasks and unlabeled data",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Kubota",
"suffix": ""
},
{
"first": "Ando",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "1817--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando and Tong Zhang. 2005. A frame- work for learning predictive structures from multi- ple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817-1853.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Berkeley FrameNet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improved Transition-based Parsing by Modeling Characters instead of Words with LSTMs",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved Transition-based Parsing by Mod- eling Characters instead of Words with LSTMs. In EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semantic tagging with deep residual networks",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic tagging with deep residual networks. In COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multi-view and multi-task training of rst discourse parsers",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Barbara Plank, and Anders S\u00f8gaard. 2016. Multi-view and multi-task training of rst dis- course parsers. In COLING.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multitask learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "Learning to learn",
"volume": "",
"issue": "",
"pages": "95--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. In Learning to learn, pages 95-133. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Open-Domain Name Error Detection using a Multi-Task RNN",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-Domain Name Error Detection using a Multi- Task RNN. In EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural Language Understanding with Distributed Representation. ArXiv",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho. 2015. Natural Language Under- standing with Distributed Representation. ArXiv, abs/1511.07916.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and informa- tion extraction with a supersense sequence tagger. In EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modelling annotator bias with multi-task gaussian processes: An application to machine translation quality estimation",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Lucia Specia. 2013. Modelling anno- tator bias with multi-task gaussian processes: An ap- plication to machine translation quality estimation. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexical semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Cruse",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Alan Cruse. 1986. Lexical semantics. Cambridge University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Frame-semantic parsing",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Schneider",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational linguistics",
"volume": "40",
"issue": "1",
"pages": "9--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das, Desai Chen, Andr\u00e9 FT Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational linguistics, 40(1):9-56.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mpqa 3.0: An entity/event-level sentiment corpus",
"authors": [
{
"first": "Lingjia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2015,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingjia Deng and Janyce Wiebe. 2015. Mpqa 3.0: An entity/event-level sentiment corpus. In NAACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Transition-Based Dependency Parsing with Stack Long Short-Term Memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- Based Dependency Parsing with Stack Long Short- Term Memory. In ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Primer on Neural Network Models for Natural Language Processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2015. A Primer on Neural Network Models for Natural Language Processing. ArXiv, abs/1510.00726.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional LSTM and other neural network architectures. Neu- ral Networks, 18(5):602-610.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Andrassy",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta, Hinrich Sch\u00fctze, and Bernt Andrassy. 2016. Table Filling Multi-Task Recurrent Neural Network for Joint Entity and Relation Extraction. In COLING.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic frame identification with distributed word representations",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame iden- tification with distributed word representations. In ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations. TACL",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and Accurate Dependency Parsing Using Bidi- rectional LSTM Feature Representations. TACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Frame-semantic role labeling with heterogeneous annotations",
"authors": [
{
"first": "Meghana",
"middle": [],
"last": "Kshirsagar",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meghana Kshirsagar, Sam Thomson, Nathan Schnei- der, Jaime Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with het- erogeneous annotations. In ACL-IJCNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recog- nition. In NAACL-HLT.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Finegrained opinion mining with recurrent neural networks and word embeddings",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Fine- grained opinion mining with recurrent neural net- works and word embeddings. In EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multi-task sequence to sequence learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning. In ICLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "End-to-end Sequence Labeling via",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01354"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end Se- quence Labeling via Bi-directional LSTM-CNNs- CRF. arXiv preprint arXiv:1603.01354.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Building a Large Annotated Corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Comput. Linguist",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a Large Anno- tated Corpus of English: The Penn Treebank. Com- put. Linguist., 19(2):313-330.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A semantic concordance",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proceedings of the workshop on Human Language Technology.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal depen- dencies v1: A multilingual treebank collection. In LREC.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Keystroke dynamics as signal for shallow syntactic parsing",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank. 2016. Keystroke dynamics as signal for shallow syntactic parsing. In COLING.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The file drawer problem and tolerance for null results",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rosenthal",
"suffix": ""
}
],
"year": 1979,
"venue": "Psychological bulletin",
"volume": "86",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Rosenthal. 1979. The file drawer problem and tolerance for null results. Psychological bulletin, 86(3):638.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Large-scale multitask learning for machine translation quality estimation",
"authors": [
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kashif Shah and Lucia Specia. 2016. Large-scale mul- titask learning for machine translation quality esti- mation. In NAACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Deep multi-task learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Khashayar",
"middle": [],
"last": "Rohanimanesh",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Machine Learning Research",
"volume": "8",
"issue": "",
"pages": "693--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. Journal of Machine Learning Research, 8(Mar):693-723.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In HLT-NAACL, pages 142-147. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Adriana Roventini, Francesca Bertagna, Antonietta Alonge, and Wim Peters",
"authors": [
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Bloksma",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Salvador",
"middle": [],
"last": "Climent",
"suffix": ""
},
{
"first": "Nicoletta",
"middle": [],
"last": "Calzolari",
"suffix": ""
}
],
"year": 1998,
"venue": "The eurowordnet base concepts and top ontology. Deliverable D017 D",
"volume": "34",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piek Vossen, Laura Bloksma, Horacio Rodriguez, Sal- vador Climent, Nicoletta Calzolari, Adriana Roven- tini, Francesca Bertagna, Antonietta Alonge, and Wim Peters. 1998. The eurowordnet base concepts and top ontology. Deliverable D017 D, 34:D036.",
"links": null
}
},
"ref_entries": {
"TABREF3": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: Comparison different POS variants (data</td></tr><tr><td>source/tag granularity): Baseline (BL) and the dif-</td></tr><tr><td>ference in performance on the +POS system when</td></tr><tr><td>using the UD Corpus with UPOS (UD/UPOS) or</td></tr><tr><td>with PTB tabs (UD/PTB), as well as the Wall</td></tr><tr><td>Street Journal with PTB tags (WSJ/PTB).</td></tr></table>",
"html": null,
"num": null
}
}
}
}