{
"paper_id": "W17-0225",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:22:06.937116Z"
},
"title": "Will my auxiliary tagging task help? Estimating Auxiliary Tasks Effectivity in Multi-Task Learning",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"country": "The Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multitask learning often improves system performance for morphosyntactic and semantic tagging tasks. However, the question of when and why this is the case has yet to be answered satisfactorily. Although previous work has hypothesised that this is linked to the label distributions of the auxiliary task, we argue that this is not sufficient. We show that information-theoretic measures which consider the joint label distributions of the main and auxiliary tasks offer far more explanatory value. Our findings are empirically supported by experiments for morphosyntactic tasks on 39 languages, and are in line with findings in the literature for several semantic tasks.",
"pdf_parse": {
"paper_id": "W17-0225",
"_pdf_hash": "",
"abstract": [
{
"text": "Multitask learning often improves system performance for morphosyntactic and semantic tagging tasks. However, the question of when and why this is the case has yet to be answered satisfactorily. Although previous work has hypothesised that this is linked to the label distributions of the auxiliary task, we argue that this is not sufficient. We show that information-theoretic measures which consider the joint label distributions of the main and auxiliary tasks offer far more explanatory value. Our findings are empirically supported by experiments for morphosyntactic tasks on 39 languages, and are in line with findings in the literature for several semantic tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When attempting to solve a natural language processing (NLP) task, one can consider the fact that many such tasks are highly related to one another. A common way of taking advantage of this is to apply multitask learning (MTL, Caruana (1998) ). MTL has been successfully applied to many linguistic sequence-prediction tasks, both syntactic and semantic in nature (Collobert and Weston, 2008; Cheng et al., 2015; Mart\u00ednez Alonso and Plank, 2016; Bjerva et al., 2016; Ammar et al., 2016; . It is, however, unclear when an auxiliary task is useful, although previous work has provided some insights (Caruana, 1998; Mart\u00ednez Alonso and Plank, 2016) .",
"cite_spans": [
{
"start": 221,
"end": 241,
"text": "(MTL, Caruana (1998)",
"ref_id": null
},
{
"start": 363,
"end": 391,
"text": "(Collobert and Weston, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 392,
"end": 411,
"text": "Cheng et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 412,
"end": 444,
"text": "Mart\u00ednez Alonso and Plank, 2016;",
"ref_id": "BIBREF12"
},
{
"start": 445,
"end": 465,
"text": "Bjerva et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 466,
"end": 485,
"text": "Ammar et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 596,
"end": 611,
"text": "(Caruana, 1998;",
"ref_id": "BIBREF2"
},
{
"start": 612,
"end": 644,
"text": "Mart\u00ednez Alonso and Plank, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Currently, considerable time and effort need to be employed in order to experimentally investigate the usefulness of any given main task / auxiliary task combination. In this paper we wish to alleviate this process by providing a means to investigating when an auxiliary task is helpful, thus also shedding light on why this is the case. Concretely, we apply information-theoretic measures to a collection of data-and tag sets, investigate correlations between such measures and auxiliary task effectivity, and show that previous hypotheses do not sufficiently explain this interaction. We investigate this both experimentally on a collection of syntactically oriented tasks on 39 languages, and verify our findings by investigating results found in the literature on semantically oriented tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recurrent Neural Networks (RNNs) are at the core of many current approaches to sequence prediction in NLP (Elman, 1990) . A bidirectional RNN is an extension which incorporates both preceding and proceeding contexts in the learning process (Graves and Schmidhuber, 2005) . Recent approaches frequently use either (bi-)LSTMs (Long Short-Term Memory) or (bi-)GRUs (Gated Recurrent Unit), which have the advantage that they can deal with longer input sequences (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) .",
"cite_spans": [
{
"start": 106,
"end": 119,
"text": "(Elman, 1990)",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 270,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 458,
"end": 492,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF10"
},
{
"start": 493,
"end": 512,
"text": "Chung et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Multitask Learning",
"sec_num": "2"
},
{
"text": "The intuition behind MTL is to improve performance by taking advantage of the fact that related tasks will benefit from similar internal representations (Caruana, 1998) . MTL is commonly framed such that all hidden layers are shared, whereas there is one output layer per task. An RNN can thus be trained to solve one main task (e.g. parsing), while also learning some other auxiliary task (e.g. POS tagging).",
"cite_spans": [
{
"start": 153,
"end": 168,
"text": "(Caruana, 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Multitask Learning",
"sec_num": "2"
},
{
"text": "We wish to give an information-theoretic perspective on when an auxiliary task will be useful for a given main task. For this purpose, we introduce some common information-theoretic measures which will be used throughout this work. 1 The entropy of a probability distribution is a measure of its unpredictability. That is to say, high entropy indicates a uniformly distributed tag set, while low entropy indicates a more skewed distribution. Formally, the entropy of a tag set can be defined as",
"cite_spans": [
{
"start": 232,
"end": 233,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(X) = \u2212 \u2211 x\u2208X p(x) log p(x),",
"eq_num": "(1)"
}
],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "where x is a given tag in tag set X. It may be more informative to take the joint probabilities of the main and auxiliary tag sets in question into account, for instance using conditional entropy. Formally, the conditional entropy of a distribution Y given the distribution X is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(Y |X) = \u2211 x\u2208X \u2211 y\u2208Y p(x, y) log p(x) p(x, y) ,",
"eq_num": "(2)"
}
],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "where x and y are all variables in the given distributions, p(x, y) is the joint probability of variable x cooccurring with variable y, and p(x) is the probability of variable x occurring at all. That is to say, if the auxiliary tag of a word is known, this is highly informative when deciding what the main tag should be. The mutual information (MI) of two tag sets is a measure of the amount of information that is obtained of one tag set, given the other tag set. MI can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "I(X;Y ) = \u2211 x\u2208X \u2211 y\u2208Y p(x, y) log p(x, y) p(x) p(y) ,",
"eq_num": "(3)"
}
],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
{
"text": "where x and y are all variables in the given distributions, p(x, y) is the joint probability of variable x cooccurring with variable y, and p(x) is the probability of variable x occurring at all. MI describes how much information is shared between X and Y , and can therefore be considered a measure of 'correlation' between tag sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic Measures",
"sec_num": "3"
},
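{
"text": "As a concrete illustration of Equations 1 to 3 (a minimal sketch in Python, ours rather than the paper's released code, using hypothetical toy tags), all three measures can be estimated directly from label counts:

from collections import Counter
from math import log2

def entropy(tags):
    # H(X) in bits (Equation 1), from a sequence of tags
    n = len(tags)
    return -sum(c / n * log2(c / n) for c in Counter(tags).values())

def conditional_entropy(xs, ys):
    # H(Y|X) in bits (Equation 2): sum of p(x, y) * log(p(x) / p(x, y))
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px = Counter(xs)
    return sum(c / n * log2((px[x] / n) / (c / n)) for (x, y), c in joint.items())

def mutual_information(xs, ys):
    # I(X;Y) in bits (Equation 3), from paired tag sequences
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

aux = ['det', 'nsubj', 'det', 'dobj']
pos = ['DET', 'NOUN', 'DET', 'NOUN']
print(entropy(pos))                   # 1.0 bit
print(conditional_entropy(aux, pos))  # 0.0: here the auxiliary tag fully determines the POS tag
print(mutual_information(aux, pos))   # 1.0, matching H(pos) - H(pos|aux)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information-theoretic Measures",
"sec_num": "3"
},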
{
"text": "Entropy has in the literature been hypothesised to be related to the usefulness of an auxiliary task (Mart\u00ednez Alonso and . We argue that this explanation is not entirely sufficient. Take, for instance, two tag sets X and X , applied to the same corpus and containing the same tags. Consider the case where the annotations differ in that the labels in every sentence using X have been randomly reordered. The tag distributions in X and X do not change as a result of this operation, hence their entropies will be the same. However, the tags in X are now likely to have a very low correspondence with any sort of natural language signal, hence X is highly unlikely to be a useful auxiliary task for X. Measures taking joint probabilities into account will capture this lack of correlation between X and X . In this work we show that measures such as conditional entropy and MI are much more informative for the effectivity of an auxiliary task than entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Theory and MTL",
"sec_num": "3.1"
},
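{
"text": "This argument is easy to check empirically. Continuing the sketch above (again ours, on hypothetical toy data), shuffling the auxiliary labels leaves their entropy untouched while their mutual information with the main labels collapses:

import random

# entropy() and mutual_information() as defined in the sketch above
main = ['DET', 'NOUN', 'VERB', 'DET', 'NOUN', 'PUNCT'] * 200
aux = ['det', 'nsubj', 'root', 'det', 'dobj', 'punct'] * 200
shuffled = aux[:]
random.shuffle(shuffled)

print(abs(entropy(aux) - entropy(shuffled)) < 1e-9)  # True: identical label distributions
print(mutual_information(main, aux))       # roughly 1.9 bits: the tag sets co-vary
print(mutual_information(main, shuffled))  # near zero: the correspondence is destroyed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Theory and MTL",
"sec_num": "3.1"
},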
{
"text": "For our syntactic experiments, we use the Universal Dependencies (UD) treebanks on 39 out of the 40 languages found in version 1.3 (Nivre et al., 2016) . 2 We experiment with POS tagging as a main task, and various dependency relation classification tasks as auxiliary tasks. We also investigate whether our hypothesis fits with recent results in the literature, by applying our informationtheoretic measures to the semantically oriented tasks in Mart\u00ednez Alonso and , as well as the semantic tagging task in Bjerva et al. (2016) . Although calculation of joint probabilities requires jointly labelled data, this issue can be bypassed without losing much accuracy. Assuming that (at least) one of the tasks under consideration can be completed automatically with high accuracy, we find that the estimates of joint probabilities are very close to actual joint probabilities on gold standard data. In this work, we estimate joint probabilities by tagging the auxiliary task data sets with a state-of-the-art POS tagger.",
"cite_spans": [
{
"start": 131,
"end": 151,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 154,
"end": 155,
"text": "2",
"ref_id": null
},
{
"start": 509,
"end": 529,
"text": "Bjerva et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
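{
"text": "This estimation step can be sketched as follows (our illustration; pos_tagger stands in for any sufficiently accurate tagger and is hypothetical):

from collections import Counter

def estimate_joint(aux_sentences, pos_tagger):
    # Estimate p(pos, aux) on auxiliary-task data by predicting the POS tags.
    # aux_sentences: iterable of (tokens, auxiliary labels) pairs
    pairs = []
    for tokens, aux_tags in aux_sentences:
        predicted_pos = pos_tagger(tokens)  # hypothetical high-accuracy tagger
        pairs.extend(zip(predicted_pos, aux_tags))
    total = len(pairs)
    return {pair: count / total for pair, count in Counter(pairs).items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},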
{
"text": "Dependency Relation Classification is the task of predicting the dependency tag (and its direction) for a given token. This is a task that has not received much attention, although it has been shown to be a useful feature for parsing (Ouchi et al., 2014) . We choose to look at several instantiations of this task, as it allows for a controlled setup under a number of conditions for MTL, and since data is available for a large number of typologically varied languages.",
"cite_spans": [
{
"start": 234,
"end": 254,
"text": "(Ouchi et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic Tasks",
"sec_num": "4.1"
},
{
"text": "Previous work has suggested various possible instantiations of dependency relation classification labels (Ouchi et al., 2016) . In this work, we use labels designed to range from highly complex and informative, to very basic ones. 3 The labelling schemes used are shown in Table 1 . The systems in the syntactic experiments are trained on main task data (D main ), and on auxiliary task data (D aux ). Generally, the amount of overlap between such pairs of data sets differs, and can roughly be divided into three categories: i) identity; ii) overlap; and iii) disjoint (no overlap between data sets). To ensure that we cover several possible experimental situations, we experiment using all three categories. We generate (D main , D aux ) pairs by splitting each UD training set into three portions. The first and second portions always contain POS labels. In the identity condition, the second portion contains dependency relations. In the overlap condition, the second and final portions contain dependency relations. In the disjoint condition, the final portion contains dependency relations.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Ouchi et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 231,
"end": 232,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Morphosyntactic Tasks",
"sec_num": "4.1"
},
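{
"text": "A sketch of this splitting procedure (our reconstruction of the described setup, not the released code):

def make_pair(sentences, condition):
    # Generate one (D_main, D_aux) pair from a UD training set.
    third = len(sentences) // 3
    p1, p2, p3 = sentences[:third], sentences[third:2 * third], sentences[2 * third:]
    d_main = p1 + p2  # POS labels on the first two portions
    if condition == 'identity':
        d_aux = p2  # the same sentences, labelled with dependency relations
    elif condition == 'overlap':
        d_aux = p2 + p3  # partial overlap with D_main
    elif condition == 'disjoint':
        d_aux = p3  # no overlap with D_main
    else:
        raise ValueError(condition)
    return d_main, d_aux",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic Tasks",
"sec_num": "4.1"
},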
{
"text": "Mart\u00ednez Alonso and Plank (2016) experiment with using, i.a., POS tagging as an auxiliary task, with main tasks based on several semantically oriented tasks: Frame detection/identification, NER, supersense annotation and MPQA. Bjerva et al. (2016) investigate using a semantic tagging task as an auxiliary task for POS tagging. We do not train systems for these data sets. Rather, we directly investigate whether changes in accuracy with the main/auxiliary tasks used in these papers are correctly predicted by any of the informationtheoretic measures under consideration here.",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "Bjerva et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Tasks",
"sec_num": "4.2"
},
{
"text": "We apply a deep neural network with the exact same settings in each syntactic experiment. Our system consists of a two layer deep bi-GRU (100 dimensions per layer), taking an embedded word representation (64 dimensions) as input. We ap-ply dropout (p = 0.4) between each layer in our network (Srivastava et al., 2014) . The output of the final bi-GRU layer, is connected to two output layers -one per task. Both tasks are always weighted equally. Optimisation is done using the Adam algorithm (Kingma and Ba, 2014) , with the categorical cross-entropy loss function. We use a batch size of 100 sentences, training over a maximum of 50 epochs, using early stopping and monitoring validation loss on the main task.",
"cite_spans": [
{
"start": 292,
"end": 317,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 493,
"end": 514,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture and Hyperparameters",
"sec_num": "5.1"
},
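{
"text": "A minimal sketch of this architecture in Keras (our approximation, not the released implementation; layer sizes follow the text, while vocabulary size, sequence length, and tag set sizes are hypothetical placeholders):

from tensorflow.keras import layers, models

vocab_size, max_len = 20000, 60  # hypothetical, data-dependent
n_pos, n_deprel = 17, 80         # hypothetical tag set sizes

words = layers.Input(shape=(max_len,), dtype='int32')
x = layers.Embedding(vocab_size, 64, mask_zero=True)(words)  # 64-dim word representations
x = layers.Dropout(0.4)(x)
x = layers.Bidirectional(layers.GRU(100, return_sequences=True))(x)
x = layers.Dropout(0.4)(x)
x = layers.Bidirectional(layers.GRU(100, return_sequences=True))(x)
x = layers.Dropout(0.4)(x)
main_out = layers.Dense(n_pos, activation='softmax', name='main')(x)   # POS tagging
aux_out = layers.Dense(n_deprel, activation='softmax', name='aux')(x)  # dependency relations

model = models.Model(words, [main_out, aux_out])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              loss_weights={'main': 1.0, 'aux': 1.0})  # tasks weighted equally

Early stopping on the main task can then be approximated with a callback monitoring the main output's validation loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture and Hyperparameters",
"sec_num": "5.1"
},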
{
"text": "We do not use pre-trained embeddings. We also do not use any task-specific features, similarly to Collobert et al. 2011, and we do not optimise any hyperparameters with regard to the task(s) at hand. Although these choices are likely to affect the overall accuracy of our systems negatively, the goal of our experiments is to investigate the effect in change in accuracy when adding an auxiliary task -not accuracy in itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture and Hyperparameters",
"sec_num": "5.1"
},
{
"text": "In the syntactic experiments, we train one system per language, dependency label category, and split condition. For sentences where only one tag set is available, we do not update weights based on the loss for the absent task. Averaged results over all languages and dependency relation instantiations, per category, are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Overview",
"sec_num": "5.2"
},
{
"text": "In order to facilitate the replicability and reproducibility of our results, we take two methodological steps. To ensure replicability, we run all experiments 10 times, in order to mitigate the effect of random processes on our results. 4 To ensure reproducibility, we release a collection including: i) A Docker file containing all code and dependencies required to obtain all data and run our experiments used in this work; and ii) a notebook containing all code for the statistical analyses performed in this work. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicability and Reproducibility",
"sec_num": "5.3"
},
{
"text": "6 Results and Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replicability and Reproducibility",
"sec_num": "5.3"
},
{
"text": "We use Spearman's \u03c1 in order to calculate correlation between auxiliary task effectivity (as measured using \u2206 acc ) and the information-theoretic measures. Following the recommendations in S\u00f8gaard et al. (2014) , we set our p cut-off value to p < 0.0025. Table 2 shows that MI correlates significantly with auxiliary task effectivity in the most commonly used settings (overlap and disjoint). As hypothesised, entropy has no significant correlation with auxiliary task effectivity, whereas conditional entropy offers some explanation. We further observe that these results hold for almost all languages, although the correlation is weaker for some languages, indicating that there are some other effects at play here. We also analyse whether significant differences can be found with respect to whether or not we have a positive \u2206 acc , using a bootstrap sample test with 10,000 iterations. We observe a significant relationship (p < 0.001) for MI. We also observe a significant relationship for conditional entropy (p < 0.001), and again find no significant difference for entropy (p \u2265 0.07). Interestingly, no correlation is found in the identity condition between \u2206 acc and any informationtheoretic measure. This is not surprising, as the most effective auxiliary task is simply more data for a task with the highest possible MI. Hence, in the overlap/disjoint conditions, high MI is highly correlated with \u2206 acc , while in the identity condition, there is no extra data. It is evident that tag set correlations in identical data is not helpful.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "S\u00f8gaard et al. (2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Morphosyntactic Tasks",
"sec_num": "6.1"
},
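{
"text": "In outline, these analyses can be reproduced as follows (our sketch on synthetic stand-in data; the paper's exact resampling procedure may differ):

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical stand-ins: per-experiment mutual information and change in
# main task accuracy, generated here with a built-in positive association.
mi = rng.uniform(0.0, 2.0, size=200)
delta_acc = 0.5 * mi - 0.4 + rng.normal(0.0, 0.2, size=200)

rho, p = spearmanr(mi, delta_acc)
print(rho, p < 0.0025)  # significance cut-off used in this work

# Bootstrap sample test: does mean MI differ between experiments with
# positive and non-positive delta_acc?
pos, neg = mi[delta_acc > 0], mi[delta_acc <= 0]
observed = pos.mean() - neg.mean()
pooled = np.concatenate([pos, neg])
hits = 0
for _ in range(10000):
    sample = rng.choice(pooled, size=pooled.size, replace=True)
    hits += abs(sample[:pos.size].mean() - sample[pos.size:].mean()) >= abs(observed)
print(hits / 10000)  # approximate p-value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphosyntactic Tasks",
"sec_num": "6.1"
},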
{
"text": "Although we do not have access to sufficient data points to run statistical analyses on the results obtained by Mart\u00ednez Alonso and Plank (2016), or by Bjerva et al. (2016) , we do observe that the mean MI for the conditions in which an auxiliary task is helpful is higher than in the cases where an auxiliary task is not helpful.",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "Bjerva et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Tasks",
"sec_num": "6.2"
},
{
"text": "We have examined the relation between auxiliary task effectivity and three information-theoretic measures. While previous research hypothesises that entropy plays a central role, we show experimentally that conditional entropy is a better predictor, and MI an even better predictor. This claim is corroborated when we correlate MI and change in accuracy with results found in the literature. It is especially interesting that MI is a better predictor than conditional entropy, since MI does not consider the order between main and auxiliary tasks. Our findings should prove helpful for researchers when considering which auxiliary tasks might be helpful for a given main task. Furthermore, it provides an explanation for the fact that there is no universally effective auxiliary task, as a purely entropy-based hypothesis would predict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The fact that MI is informative when determining the effectivity of an auxiliary task can be explained by considering an auxiliary task to be similar to adding a feature. That is to say, useful features are likely to be useful auxiliary tasks. Interestingly, however, the gains of adding an auxiliary task are visible at test time for the main task, when no explicit auxiliary label information is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We tested our hypothesis on 39 languages, representing a wide typological range, as well as a wide range of data sizes. Our experiments were run on syntactically oriented tasks of various granularities. We also corroborated our findings with results from semantically oriented tasks in the literature. Hence our results generalise both across a range of languages, data sizes, and NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "See Cover and Thomas (2012) for an in-depth overview.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Japanese was excluded due to treebank unavailability.3 Labels are automatically derived from UD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Approximately 10,000 runs using 400,000 CPU hours. 5 https://github.com/bjerva/mtl-cond-entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the NWO-VICI grant \"Lost in Translation -Found in Meaning\" (288-89-003). We would like to thank Barbara Plank, Robert\u00d6stling, Johan Sjons, and the anonymous reviewers for their comments on previous versions of this manuscript. We would also like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many lan- guages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic tagging with deep residual networks",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic tagging with deep residual networks. In Proceedings of COLING 2016, page 35313541, Os- aka, Japan.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multitask learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1998. Multitask learning. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Open-domain name error detection using a multitask rnn",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multi- task rnn. In EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th international conference on Machine learning, pages 160-167. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Elements of information theory",
"authors": [
{
"first": "Thomas",
"middle": [
"M"
],
"last": "Cover",
"suffix": ""
},
{
"first": "Joy",
"middle": [
"A"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M Cover and Joy A Thomas. 2012. Elements of information theory. John Wiley & Sons.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Finding structure in time",
"authors": [
{
"first": "",
"middle": [],
"last": "Jeffrey L Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural Net- works, 18(5):602-610.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multitask learning for semantic sequence prediction under varying data conditions",
"authors": [
{
"first": "Alonso",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00e9ctor Mart\u00ednez Alonso and Barbara Plank. 2016. Multitask learning for semantic sequence prediction under varying data conditions. In arXiv preprint, to appear at EACL 2017 (long paper).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependen- cies v1: A multilingual treebank collection. In Pro- ceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving dependency parsers with supertags",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Ouchi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "154--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroki Ouchi, Kevin Duh, and Yuji Matsumoto. 2014. Improving dependency parsers with supertags. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Lin- guistics, volume 2: Short Papers, pages 154-158. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transition-Based Dependency Parsing Exploiting Supertags",
"authors": [
{
"first": "Hiroki",
"middle": [],
"last": "Ouchi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroki Ouchi, Kevin Duh, Hiroyuki Shindo, and Yuji Matsumoto. 2016. Transition-Based Dependency Parsing Exploiting Supertags. In IEEE/ACM Trans- actions on Audio, Speech and Language Processing, volume 24.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of ACL 2016.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep multi-task learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, volume 2, pages 231-235. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Whats in a p-value in NLP?",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2014,
"venue": "CoNLL-2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martinez. 2014. Whats in a p-value in NLP? In CoNLL-2014.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Dependency relation labels used in this work, with entropy in bytes (H) measured on English. The labels differ in the granularity and/or inclusion of the category and/or directionality."
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Correlation scores and associated p-values, between change in accuracy (\u2206 acc ) and entropy (H(Y )), conditional entropy (H(Y |X)), and mutual information (I(X;Y )), calculated with Spearman's \u03c1, across all languages and label instantiations. Bold indicates the strongest significant correlations."
}
}
}
}