{
"paper_id": "C10-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:58:15.299695Z"
},
"title": "Very High Accuracy and Fast Dependency Parsing is not a Contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In addition to a high accuracy, short parsing and training times are the most important properties of a parser. However, parsing and training times are still relatively long. To determine why, we analyzed the time usage of a dependency parser. We illustrate that the mapping of the features onto their weights in the support vector machine is the major factor in time complexity. To resolve this problem, we implemented the passive-aggressive perceptron algorithm as a Hash Kernel. The Hash Kernel substantially improves the parsing times and takes into account the features of negative examples built during the training. This has lead to a higher accuracy. We could further increase the parsing and training speed with a parallel feature extraction and a parallel parsing algorithm. We are convinced that the Hash Kernel and the parallelization can be applied successful to other NLP applications as well such as transition based dependency parsers, phrase structrue parsers, and machine translation.",
"pdf_parse": {
"paper_id": "C10-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "In addition to a high accuracy, short parsing and training times are the most important properties of a parser. However, parsing and training times are still relatively long. To determine why, we analyzed the time usage of a dependency parser. We illustrate that the mapping of the features onto their weights in the support vector machine is the major factor in time complexity. To resolve this problem, we implemented the passive-aggressive perceptron algorithm as a Hash Kernel. The Hash Kernel substantially improves the parsing times and takes into account the features of negative examples built during the training. This has lead to a higher accuracy. We could further increase the parsing and training speed with a parallel feature extraction and a parallel parsing algorithm. We are convinced that the Hash Kernel and the parallelization can be applied successful to other NLP applications as well such as transition based dependency parsers, phrase structrue parsers, and machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Highly accurate dependency parsers have high demands on resources and long parsing times. The training of a parser frequently takes several days and the parsing of a sentence can take on average up to a minute. The parsing time usage is important for many applications. For instance, dialog systems only have a few hundred milliseconds to analyze a sentence and machine translation systems, have to consider in that time some thousand translation alternatives for the translation of a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Parsing and training times can be improved by methods that maintain the accuracy level, or methods that trade accuracy against better parsing times. Software developers and researchers are usually unwilling to reduce the quality of their applications. Consequently, we have to consider at first methods to improve a parser, which do not involve an accuracy loss, such as faster algorithms, faster implementation of algorithms, parallel algorithms that use several CPU cores, and feature selection that eliminates the features that do not improve accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We employ, as a basis for our parser, the second order maximum spanning tree dependency parsing algorithm of Carreras (2007) . This algorithm frequently reaches very good, or even the best labeled attachment scores, and was one of the most used parsing algorithms in the shared task 2009 of the Conference on Natural Language Learning (CoNLL) (Haji\u010d et al., 2009) . We combined this parsing algorithm with the passive-aggressive perceptron algorithm (Crammer et al., 2003; McDonald et al., 2005; Crammer et al., 2006) . A parser build out of these two algorithms provides a good baseline and starting point to improve upon the parsing and training times.",
"cite_spans": [
{
"start": 109,
"end": 124,
"text": "Carreras (2007)",
"ref_id": "BIBREF14"
},
{
"start": 343,
"end": 363,
"text": "(Haji\u010d et al., 2009)",
"ref_id": null
},
{
"start": 450,
"end": 472,
"text": "(Crammer et al., 2003;",
"ref_id": "BIBREF17"
},
{
"start": 473,
"end": 495,
"text": "McDonald et al., 2005;",
"ref_id": "BIBREF27"
},
{
"start": 496,
"end": 517,
"text": "Crammer et al., 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows. In Section 2, we describe related work. In section 3, we analyze the time usage of the components of the parser. In Section 4, we introduce a new Kernel that resolves some of the bottlenecks and improves the performance. In Section 5, we describe the parallel parsing algorithms which nearly allowed us to divide the parsing times by the number of cores. In Section 6, we determine the optimal setting for the Non-Projective Approximation Algorithm. In Section 7, we conclude with a summary and an outline of further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The two main approaches to dependency parsing are transition based dependency parsing (Nivre, 2003; Yamada and Matsumoto., 2003; Titov and Henderson, 2007) and maximum spanning tree based dependency parsing (Eisner, 1996; Eisner, 2000; McDonald and Pereira, 2006) . Transition based parsers typically have a linear or quadratic complexity (Nivre et al., 2004; Attardi, 2006) . Nivre (2009) introduced a transition based nonprojective parsing algorithm that has a worst case quadratic complexity and an expected linear parsing time. Titov and Henderson (2007) combined a transition based parsing algorithm, which used a beam search with a latent variable machine learning technique.",
"cite_spans": [
{
"start": 86,
"end": 99,
"text": "(Nivre, 2003;",
"ref_id": "BIBREF30"
},
{
"start": 100,
"end": 128,
"text": "Yamada and Matsumoto., 2003;",
"ref_id": "BIBREF37"
},
{
"start": 129,
"end": 155,
"text": "Titov and Henderson, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 207,
"end": 221,
"text": "(Eisner, 1996;",
"ref_id": "BIBREF19"
},
{
"start": 222,
"end": 235,
"text": "Eisner, 2000;",
"ref_id": "BIBREF20"
},
{
"start": 236,
"end": 263,
"text": "McDonald and Pereira, 2006)",
"ref_id": "BIBREF26"
},
{
"start": 339,
"end": 359,
"text": "(Nivre et al., 2004;",
"ref_id": "BIBREF29"
},
{
"start": 360,
"end": 374,
"text": "Attardi, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 377,
"end": 389,
"text": "Nivre (2009)",
"ref_id": "BIBREF31"
},
{
"start": 532,
"end": 558,
"text": "Titov and Henderson (2007)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Maximum spanning tree dependency based parsers decomposes a dependency structure into parts known as \"factors\". The factors of the first order maximum spanning tree parsing algorithm are edges consisting of the head, the dependent (child) and the edge label. This algorithm has a quadratic complexity. The second order parsing algorithm of McDonald and Pereira (2006) uses a separate algorithm for edge labeling. This algorithm uses in addition to the first order factors: the edges to those children which are closest to the dependent. The second order algorithm of Carreras (2007) uses in addition to McDonald and Pereira (2006) the child of the dependent occurring in the sentence between the head and the dependent, and the an edge to a grandchild. The edge labeling is an integral part of the algorithm which requires an additional loop over the labels. This algorithm therefore has a complexity of O(n 4 ). Johansson and Nugues (2008) reduced the needed number of loops over the edge labels by using only the edges that existed in the training corpus for a distinct head and child part-of-speech tag combination.",
"cite_spans": [
{
"start": 340,
"end": 367,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF26"
},
{
"start": 567,
"end": 582,
"text": "Carreras (2007)",
"ref_id": "BIBREF14"
},
{
"start": 603,
"end": 630,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF26"
},
{
"start": 913,
"end": 940,
"text": "Johansson and Nugues (2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The transition based parsers have a lower complexity. Nevertheless, the reported run times in the last shared tasks were similar to the maximum spanning tree parsers. For a transition based parser, Gesmundo et al. (2009) reported run times between 2.2 days for English and 4.7 days for Czech for the joint training of syntactic and semantic dependencies. The parsing times were about one word per second, which speeds up quickly with a smaller beam-size, although the accuracy of the parser degrades a bit. Johansson and Nugues (2008) reported training times of 2.4 days for English with the high-order parsing algorithm of Carreras (2007) .",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "Gesmundo et al. (2009)",
"ref_id": "BIBREF21"
},
{
"start": 507,
"end": 534,
"text": "Johansson and Nugues (2008)",
"ref_id": "BIBREF25"
},
{
"start": 624,
"end": 639,
"text": "Carreras (2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We built a baseline parser to measure the time usage. The baseline parser resembles the architecture of McDonald and Pereira (2006) . It consists of the second order parsing algorithm of Carreras (2007) , the non-projective approximation algorithm (McDonald and Pereira, 2006) , the passiveaggressive support vector machine, and a feature extraction component. The features are listed in Table 4 . As in McDonald et al. (2005) , the parser stores the features of each training example in a file. In each epoch of the training, the feature file is read, and the weights are calculated and stored in an array. This procedure is up to 5 times faster than computing the features each time anew. But the parser has to maintain large arrays: for the weights of the sentence and the training file. Therefore, the parser needs 3GB of main memory for English and 100GB of disc space for the training file. The parsing time is approximately 20% faster, since some of the values did not have to be recalculated.",
"cite_spans": [
{
"start": 104,
"end": 131,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF26"
},
{
"start": 187,
"end": 202,
"text": "Carreras (2007)",
"ref_id": "BIBREF14"
},
{
"start": 248,
"end": 276,
"text": "(McDonald and Pereira, 2006)",
"ref_id": "BIBREF26"
},
{
"start": 404,
"end": 426,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "Algorithm 1 illustrates the training algorithm in pseudo code. \u03c4 is the set of training examples where an example is a pair (x i , y i ) of a sentence and the corresponding dependency structure. \u2212 \u2192 w and \u2212 \u2192 v are weight vectors. Table 1 : t e+s is the elapsed time in milliseconds to extract and store the features, t r to read the features and to calculate the weight arrays, t p to predict the projective parse tree, t a to apply the non-projective approximation algorithm, rest is the time to conduct the other parts such as the update function, train. is the total training time per instance (t r + t p + t a +rest ), and t e is the elapsed time to extract the features. The next columns illustrate the parsing time in milliseconds per sentence for the test set, training time in hours, the number of sentences in the training set, the total number of features in million, the labeled attachment score of the test set, and the unlabeled attachment score.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "Algorithm 1: Training -baseline algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "\u03c4 = {(xi, yi)} I i=1 // Training data \u2212 \u2192 w = 0, \u2212 \u2192 v = 0 \u03b3 = E * I // passive-aggresive update weight for i = 1 to I t s s+e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "; extract-and-store-features(xi); t e s+e ; for n = 1 to E // iteration over the training epochs for i = 1 to I // iteration over the training examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "k \u2190 (n \u2212 1) * I + i \u03b3 = E * I \u2212 k + 2 // passive-aggressive weight t s r,k ; A = read-features-and-calc-arrays(i, \u2212 \u2192 w ) ; t e r,k t s p,k ; yp = predicte-projective-parse-tree(A);t e p,k t s a,k ; ya = non-projective-approx.(yp,A); t e a,k update \u2212 \u2192 w , \u2212 \u2192 v according to \u2206(yp, yi) and \u03b3 w = v/(E * I) // average dren \u03c6 h,d,g where h, d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": ", g, and s are the indexes of the words included in x i . Finally, the method stores the feature vectors on the hard disc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "The next two loops build the main part of the training algorithm. The outer loop iterates over the number of training epochs, while the inner loop iterates over all training examples. The online training algorithm considers a single training example in each iteration. The first function in the loop reads the features and computes the weights A for the factors in the sentence x i . A is a set of weight arrays.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
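To make the structure of Algorithm 1 concrete, here is a minimal Python sketch of the baseline online training loop. The component functions are passed in as callables, and their names are placeholders for the parts described above; this is a sketch of the control flow under those assumptions, not the parser's actual implementation.

```python
# Minimal sketch of the baseline online training loop (Algorithm 1).
# All component callables are hypothetical placeholders for the parts
# described in the text; they are not the parser's actual implementation.

def train_baseline(training_data, epochs, extract_and_store, read_and_calc_arrays,
                   predict_projective, non_projective_approx, update):
    I = len(training_data)
    w, v = {}, {}                                  # current and accumulated weights (sparse)

    # First loop: extract and store the features of every sentence once,
    # so that later epochs only read them back instead of recomputing them.
    for i, (sentence, _gold) in enumerate(training_data):
        extract_and_store(sentence, i)

    for n in range(1, epochs + 1):                 # iteration over the training epochs
        for i, (sentence, gold) in enumerate(training_data):
            k = (n - 1) * I + i + 1
            gamma = epochs * I - k + 2             # passive-aggressive update weight
            A = read_and_calc_arrays(i, w)         # weight arrays for the factors of x_i
            y_p = predict_projective(A)            # projective second-order parse
            y_a = non_projective_approx(y_p, A)    # possibly non-projective rearrangement
            update(w, v, sentence, gold, y_a, gamma)

    # Averaging the accumulated vector reduces the effect of overfitting (Collins, 2002).
    return {f: weight / (epochs * I) for f, weight in v.items()}
```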
{
"text": "A = { \u2212 \u2192 w * \u2212 \u2192 f h,d , \u2212 \u2192 w * \u2212 \u2192 f h,d,s , \u2212 \u2192 w * \u2212 \u2192 f h,d,g }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
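As an illustration of how the weight arrays A can be computed, the sketch below scores one factor as the sum of the weights at its feature indexes (the sparse dot product w * f). The representation of factors as lists of feature indexes is an assumption made only for this example.

```python
# Sketch: the weight array entry for one factor is the dot product of the
# weight vector w with that factor's sparse feature vector, i.e. the sum of
# the weights at the factor's feature indexes (illustrative representation).

def score_factor(w, feature_indexes):
    """w: weight vector (list); feature_indexes: indexes of the binary
    features that fire for this factor (edge, sibling or grandchild)."""
    return sum(w[j] for j in feature_indexes)

def calc_weight_arrays(w, factors):
    """factors: dict mapping factor keys such as (h, d), (h, d, s), (h, d, g)
    to their feature index lists; returns the weight arrays A as a dict."""
    return {key: score_factor(w, idx) for key, idx in factors.items()}
```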
{
"text": "The parsing algorithm uses the weight arrays to predict a projective dependency structure y p . The non-projective approximation algorithm has as input the dependency structure and the weight arrays. It rearranges the edges and tries to increase the total score of the dependency structure. This algorithm builds a dependency structure y a , which might be non-projective. The training al-gorithm updates \u2212 \u2192 w according to the difference between the predicted dependency structures y a and the reference structure y i . It updates \u2212 \u2192 v as well, whereby the algorithm additionally weights the updates by \u03b3. Since the algorithm decreases \u03b3 in each round, the algorithm adapts the weights more aggressively at the beginning (Crammer et al., 2006) . After all iterations, the algorithm computes the average of \u2212 \u2192 v , which reduces the effect of overfitting (Collins, 2002) .",
"cite_spans": [
{
"start": 723,
"end": 745,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF18"
},
{
"start": 856,
"end": 871,
"text": "(Collins, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "We have inserted into the training algorithm functions to measure the start times t s and the end times t e for the procedures to compute and store the features, to read the features, to predict the projective parse, and to calculate the nonprojective approximation. We calculate the average elapsed time per instance, as the average over all training examples and epochs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "t x = E * I k=1 t e x,k \u2212t s x,k E * I .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
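A minimal sketch of this kind of instrumentation, using Python's time module purely for illustration: every call of a component is timed, and the recorded values are averaged over all E * I instances, as in the formula above.

```python
import time

# Sketch: wrap one component so that its elapsed time is recorded for every
# call; averaging over all E * I calls gives t_x from the formula above.

def make_timed(component):
    elapsed = []                                  # one entry per training instance
    def wrapper(*args, **kwargs):
        t_start = time.perf_counter()
        result = component(*args, **kwargs)
        elapsed.append(time.perf_counter() - t_start)
        return result
    return wrapper, elapsed

def average_ms(elapsed):
    """Average elapsed time per instance in milliseconds (t_x in the text)."""
    return 1000.0 * sum(elapsed) / len(elapsed)
```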
{
"text": "We use the training set and the test set of the CoNLL shared task 2009 for our experiments. Table 1 shows the elapsed times in 1 1000 seconds (milliseconds) of the selected languages for the procedure calls in the loops of Algorithm 1. We had to measure the times for the feature extraction in the parsing algorithm, since in the training algorithm, the time can only be measured together with the time for storing the features. The table contains additional figures for the total training time and parsing scores. 1 The parsing algorithm itself only required, to our surprise, 12.5 ms (t p ) for a English sentence on average, while the feature extraction needs 1223 ms. To extract the features takes about 100 times longer than to build a projective dependency tree. The feature extraction is already implemented efficiently. It uses only numbers to represent features which it combines to a long integer number and then maps by a hash table 2 to a 32bit integer number. The parsing algorithm uses the integer number as an index to access the weights in the vectors \u2212 \u2192 w and \u2212 \u2192 v .",
"cite_spans": [
{
"start": 515,
"end": 516,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
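The sketch below illustrates this kind of feature encoding: the parts of a feature are packed into a single long integer, and a dictionary (standing in for the trove hash table) maps that number to a compact index into the weight vectors. The field widths and the packing order are illustrative assumptions, not the parser's actual encoding.

```python
# Sketch of the feature-index mapping described above: feature parts are packed
# into a single 64-bit number, and a hash table maps that number to a compact
# 32-bit index into the weight vectors.  The field widths chosen here are
# illustrative assumptions, not the parser's actual encoding.

def pack_feature(template_id, label, pos_head, pos_dep, distance):
    code = template_id                 # feature template number
    code = (code << 8)  | label        # edge label id   (assumed < 256)
    code = (code << 10) | pos_head     # head POS id     (assumed < 1024)
    code = (code << 10) | pos_dep      # dependent POS id
    code = (code << 8)  | distance     # binned distance
    return code                        # fits easily into 64 bits

feature_index = {}                     # long feature code -> compact int index

def map_to_index(code, grow=True):
    """Return the weight-vector index for a feature code; grow only for
    features built from training examples ('positive' features)."""
    idx = feature_index.get(code)
    if idx is None and grow:
        idx = len(feature_index)
        feature_index[code] = idx
    return idx
```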
{
"text": "The complexity of the parsing algorithm is usually considered the reason for long parsing times. However, it is not the most time consuming component as proven by the above analysis. Therefore, we investigated the question further, asking what causes the high time consumption of the feature extraction?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "In our next experiment, we left out the mapping of the features to the index of the weight vectors. The feature extraction takes 88 ms/sentence without the mapping and 1223 ms/sentence with the mapping. The feature-index mapping needs 93% of the time to extract the features and 91% of the total parsing time. What causes the high time consumption of the feature-index mapping?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "The mapping has to provide a number as an index for the features in the training examples and to filter out the features of examples built, while the parser predicts the dependency structures. The algorithm filters out negative features to reduce the memory requirement, even if they could improve the parsing result. We will call the features built due to the training examples positive features and the rest negative features. We counted 5.8 times more access to negative features than positive features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "We now look more into the implementation details of the used hash table to answer the previously asked question. The hash table for the feature-index mapping uses three arrays: one for the keys, one for the values and a status array to indicate the deleted elements. If a program stores a value then the hash function uses the key to calculate the location of the value. Since the hash function is a heuristic function, the predicted location might be wrong, which leads to so-called 2 We use the hash tables of the trove library:",
"cite_spans": [
{
"start": 484,
"end": 485,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "http://sourceforge.net/projects/trove4j. hash misses. In such cases the hash algorithm has to retry to find the value. We counted 87% hash misses including misses where the hash had to retry several times. The number of hash misses was high, because of the additional negative features. The CPU cache can only store a small amount of the data from the hash table. Therefore, the memory controller has frequently to transfer data from the main memory into the CPU. This procedure is relatively slow. We traced down the high time consumption to the access of the key and the access of the value. Successive accesses to the arrays are fast, but the relative random accesses via the hash function are very slow. The large number of accesses to the three arrays, because of the negative features, positive features and because of the hash misses multiplied by the time needed to transfer the data into the CPU are the reason for the high time consumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
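To make the cost analysis concrete, here is a simplified open-addressing hash map with separate key, value and status arrays and a counter for hash misses (probe retries). It is a stand-in for the trove hash table, not its actual implementation.

```python
# Simplified open-addressing hash table with separate key, value and status
# arrays, as described above; a stand-in for the trove map, not its code.

class OpenAddressingMap:
    EMPTY, USED, DELETED = 0, 1, 2

    def __init__(self, capacity=1 << 20):
        self.capacity = capacity
        self.keys = [0] * capacity
        self.values = [0] * capacity
        self.status = [self.EMPTY] * capacity
        self.misses = 0                       # counts probe retries ("hash misses")

    def _slot(self, key):
        i = hash(key) % self.capacity         # heuristic first guess at the location
        while self.status[i] == self.USED and self.keys[i] != key:
            self.misses += 1                  # wrong location: retry (linear probing)
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        i = self._slot(key)
        self.keys[i], self.values[i], self.status[i] = key, value, self.USED

    def get(self, key):
        i = self._slot(key)
        return self.values[i] if self.status[i] == self.USED else None
```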
{
"text": "We tried to solve this problem with Bloom filters, larger hash tables and customized hash functions to reduce the hash misses. These techniques did not help much. However, a substantial improvement did result when we eliminated the hash table completely, and directly accessed the weight vectors \u2212 \u2192 w and \u2212 \u2192 v with a hash function. This led us to the use of Hash Kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Time Usage",
"sec_num": "3"
},
{
"text": "A Hash Kernel for structured data uses a hash function h : J \u2192 {1...n} to index \u03c6, cf. Shi et al. (2009) . \u03c6 maps the observations X to a feature space. We define \u03c6(x, y) as the numeric feature representation indexed by J. Let \u03c6 k (x, y) = \u03c6 j (x, y) the hash based feature-index mapping, where h(j) = k. The process of parsing a sentence x i is to find a parse tree y p that maximizes a scoring function argmax y F (x i , y). The learning problem is to fit the function F so that the errors of the predicted parse tree y are as low as possible. The scoring function of the Hash Kernel is",
"cite_spans": [
{
"start": 87,
"end": 104,
"text": "Shi et al. (2009)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
{
"text": "F (x, y) = \u2212 \u2192 w * \u03c6(x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
{
"text": "where \u2212 \u2192 w is the weight vector and the size of \u2212 \u2192 w is n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
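A minimal sketch of this scoring function: every raw feature code j is sent through the hash function h directly to a slot of the fixed-size weight vector, so no feature-index table is needed. Python's built-in hash stands in for h purely for illustration.

```python
# Sketch of the Hash Kernel scoring function F(x, y) = w * phi(x, y):
# each raw feature code j is mapped by a hash function h to a slot of the
# fixed-size weight vector, without any feature-index table.

def h(j, size):
    # illustrative stand-in for the hash function of the Hash Kernel
    return hash(j) % size

def score(w, feature_codes):
    """w: fixed-size weight vector (list); feature_codes: raw feature codes
    (e.g. packed long integers) of the representation phi(x, y)."""
    return sum(w[h(j, len(w))] for j in feature_codes)
```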
{
"text": "Algorithm 2 shows the update function of the Hash Kernel. We derived the update function from the update function of MIRA (Crammer et Algorithm 2: Update of the Hash Kernel , 2006) . The parameters of the function are the weight vectors \u2212 \u2192 w and \u2212 \u2192 v , the sentence x i , the gold dependency structure y i , the predicted dependency structure y p , and the update weight \u03b3. The function \u2206 calculates the number of wrong labeled edges. The update function updates the weight vectors, if at least one edge is labeled wrong. It calculates the difference \u2212 \u2192 u of the feature vectors of the gold dependency structure \u03c6(x i , y i ) and the predicted dependency structure \u03c6(x i , y p ). Each time, we use the feature representation \u03c6, the hash function h maps the features to integer numbers between 1 and | \u2212 \u2192 w |. After that the update function calculates the margin \u03bd and updates \u2212 \u2192 w and \u2212 \u2192 v respectively. Algorithm 3 shows the training algorithm for the Hash Kernel in pseudo code. A main difference to the baseline algorithm is that it does not store the features because of the required time which is needed to store the additional negative features. Accordingly, the algorithm first extracts the features for each training instance, then maps the features to indexes for the weight vector with the hash function and calculates the weight arrays. For different j, the hash function h(j) might generate the same value k. This means that the hash function maps more than one feature to the same weight. We call such cases collisions. Collisions can reduce the accuracy, since the weights are changed arbitrarily. This procedure is similar to randomization of weights (features), which aims to save space by sharing values in the weight vector (Blum., 2006; Rahimi and Recht, 2008) . The Hash Kernel shares values when collisions occur that can be considered as an approximation of the kernel function, because a weight might be adapted due to more than one feature. If the approximation works well then we would need only a relatively small weight vector otherwise we need a larger weight vector to reduce the chance of collisions. In an experiments, we compared two hash functions and different hash sizes. We selected for the comparison a standard hash function (h 1 ) and a custom hash function (h 2 ). The idea for the custom hash function h 2 is not to overlap the values of the feature sequence number and the edge label with other values. These values are stored at the beginning of a long number, which represents a feature. Table 2 shows the labeled attachment scores for selected weight vector sizes and the number of nonzero weights. Most of the numbers in Table 2 are primes, since they are frequently used to obtain a better distribution of the content in hash ta-bles. h 2 has more nonzero weights than h 1 . Nevertheless, we did not observe any clear improvement of the accuracy scores. The values do not change significantly for a weight vector size of 10 million and more elements. We choose a weight vector size of 115911564 values for further experiments since we get more non zero weights and therefore fewer collisions. Table 3 : The time in milliseconds for the feature extraction, projective parsing, non-projective approximation, rest (r), the total training time per instance, the average parsing (par.) 
time in milliseconds for the test set and the training time in hours Figure 1 : The difference of the labeled attachment score between the baseline parser and the parser with the Hash Kernel (y-axis) for increasing large training sets (x-axis). Table 3 contains the measured times for the Hash Kernel as used in Algorithm 2. The parser needs 0.354 seconds in average to parse a sentence of the English test set. This is 3.5 times faster than the baseline parser. The reason for that is the faster feature mapping of the Hash Kernel. Therefore, the measured time t e for the feature extraction and the calculation of the weight arrays are much lower than for the baseline parser. The training is about 19% slower since we could no longer use a file to store the feature indexes of the training examples because of the large number of negative features. We counted about twice the number of nonzero weights in the weight vector of the Hash Kernel compared to the baseline parser. For instance, we counted for English 17.34 Millions nonzero weights in the Hash Kernel and 8.47 Millions in baseline parser and for Chinese 18.28 Millions nonzero weights in the Hash Kernel and 8.76 Millions in the baseline parser. Table 6 shows the scores for all languages of the shared task 2009. The attachment scores increased for all languages. It increased most for Catalan and Spanish. These two corpora have the smallest training sets. We searched for the reason and found that the Hash Kernel provides an overproportional accuracy gain with less training data compared to MIRA. Figure 1 shows the difference between the labeled attachment score of the parser with MIRA and the Hash Kernel for Spanish. The decreasing curve shows clearly that the Hash Kernel provides an overproportional accuracy gain with less training data compared to the baseline. This provides an advantage for small training corpora.",
"cite_spans": [
{
"start": 122,
"end": 133,
"text": "(Crammer et",
"ref_id": null
},
{
"start": 173,
"end": 180,
"text": ", 2006)",
"ref_id": null
},
{
"start": 1748,
"end": 1761,
"text": "(Blum., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 1762,
"end": 1785,
"text": "Rahimi and Recht, 2008)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 2538,
"end": 2545,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 2673,
"end": 2681,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 3147,
"end": 3154,
"text": "Table 3",
"ref_id": null
},
{
"start": 3404,
"end": 3412,
"text": "Figure 1",
"ref_id": null
},
{
"start": 3580,
"end": 3587,
"text": "Table 3",
"ref_id": null
},
{
"start": 4545,
"end": 4552,
"text": "Table 6",
"ref_id": null
},
{
"start": 4901,
"end": 4909,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
{
"text": "// yp = arg maxyF (xi, y) update( \u2212 \u2192 w , \u2212 \u2192 v , xi, yi, yp, \u03b3) \u01eb = \u2206(yi, yp) // number of wrong labeled edges if \u01eb > 0 then \u2212 \u2192 u \u2190 (\u03c6(xi, yi) \u2212 \u03c6(xi, yp)) \u03bd = \u01eb\u2212(F (xt,y i )\u2212F (x i ,yp)) || \u2212 \u2192 u || 2 \u2212 \u2192 w \u2190 \u2212 \u2192 w + \u03bd * \u2212 \u2192 u \u2212 \u2192 v \u2190 v + \u03b3 * \u03bd * \u2212 \u2192 u return \u2212 \u2192 w , \u2212 \u2192 v al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
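The sketch below restates Algorithm 2 under the same hashing scheme: the difference vector u is kept sparse over hashed indexes, and w and v are updated with the margin-scaled step. It assumes that feature extraction yields lists of raw feature codes; it is not the parser's actual code.

```python
from collections import Counter

def h(j, size):
    # illustrative stand-in for the hash function of the Hash Kernel
    return hash(j) % size

def update(w, v, phi_gold, phi_pred, n_wrong_edges, gamma):
    """Sketch of Algorithm 2; phi_gold / phi_pred are lists of raw feature
    codes for phi(x_i, y_i) and phi(x_i, y_p); w and v are lists of floats."""
    if n_wrong_edges == 0:
        return w, v
    size = len(w)
    u = Counter(h(j, size) for j in phi_gold)        # u = phi(x_i, y_i) - phi(x_i, y_p),
    u.subtract(h(j, size) for j in phi_pred)         # kept sparse over hashed indexes
    norm_sq = sum(c * c for c in u.values())
    if norm_sq == 0:
        return w, v
    margin_gap = sum(w[k] * c for k, c in u.items()) # = F(x_i, y_i) - F(x_i, y_p)
    nu = (n_wrong_edges - margin_gap) / norm_sq      # margin-scaled step size
    for k, c in u.items():
        w[k] += nu * c
        v[k] += gamma * nu * c
    return w, v
```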
{
"text": "h1 \u2190 |(l xor(l \u2228 0xffffffff00000000 >> 32))% size| 3 h2 \u2190 |(l xor ((l >> 13) \u2228 0xffffffffffffe000) xor ((l >> 24) \u2228 0xffffffffffff0000) xor ((l >> 33) \u2228 0xfffffffffffc0000) xor ((l >> 40) \u2228 0xfffffffffff00000)) %",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
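The exact bit patterns of h_1 and h_2 are hard to recover from the extracted text, so the sketch below only illustrates the general scheme both functions follow: mix the 64-bit feature code with shifted copies of itself via xor and reduce the result modulo the weight vector size. The shift amounts are placeholders, not the constants used in the parser.

```python
MASK64 = (1 << 64) - 1   # emulate 64-bit arithmetic in Python

# Illustrative shift/xor/modulo hash in the spirit of h1 and h2 above; the
# shift amounts are placeholders, not the parser's actual constants.

def shift_xor_hash(l, size):
    x = l & MASK64                      # treat the feature code as a 64-bit value
    for shift in (13, 24, 33, 40):      # mix in shifted copies of the code
        x ^= (x >> shift)
    return x % size                     # reduce to an index into the weight vector
```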
{
"text": "However, this is probably not the main reason for the high improvement, since for languages with only slightly larger training sets such as Chinese the improvement is much lower and the gradient at the end of the curve is so that a huge amount of training data would be needed to make the curve reach zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash Kernel",
"sec_num": "4"
},
{
"text": "Current CPUs have up to 12 cores and we will see soon CPUs with more cores. Also graphic cards provide many simple cores. Parsing algorithms can use several cores. Especially, the tasks to extract the features and to calculate the weight arrays can be well implemented as parallel algorithm. We could also successful parallelize the projective parsing and the non-projective approximation algorithm. Algorithm 4 shows the parallel feature extraction in pseudo code. The main method prepares a list of tasks which can be performed in parallel and afterwards it creates the threads that perform the tasks. Each thread removes from the task list an element, carries out the task and stores the result. This procedure is repeated until the list is empty. The main method waits until all threads are completed and returns the result. For the parallel algorithms, Table 5 shows the elapsed times depend on the number of d(x,y,[,z] ) the order of words, and r(x,y) the distance.",
"cite_spans": [],
"ref_spans": [
{
"start": 858,
"end": 865,
"text": "Table 5",
"ref_id": null
},
{
"start": 914,
"end": 924,
"text": "d(x,y,[,z]",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parallelization",
"sec_num": "5"
},
{
"text": "used cores. The parsing time is 1.9 times faster on two cores and 3.4 times faster on 4 cores. Hyper threading can improve the parsing times again and we get with hyper threading 4.6 faster parsing times. Hyper threading possibly reduces the overhead of threads, which contains already our single core version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallelization",
"sec_num": "5"
},
{
"text": "A // weight arrays extract-features-and-calc-arrays(xi) data-list \u2190 {} // thread-save data list for w1 \u2190 1 to |xi| for w2 \u2190 1 to |xi| data-list \u2190 data-list \u222a{(w1, w2)} c \u2190 number of CPU cores for t \u2190 1 to c Tt \u2190 create-array-thread(t, xi,data-list) start array-thread Tt// start thread t for t \u2190 1 to c join Tt// wait until thread t is finished A \u2190 A \u222a collect-result(Tt) return A // array-thread Table 5 : Elapsed times in milliseconds for different numbers of cores. The parsing time (pars.) are expressed in milliseconds per sentence and the training (train.) time in hours. The last row shows the times for 8 threads on a 4 core CPU with Hyper-threading. For these experiment, we set the clock speed to 3.46 Ghz in order to have the same clock speed for all experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Algorithm 4: Parallel Feature Extraction",
"sec_num": null
},
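A minimal Python rendering of the scheme of Algorithm 4: the word-pair tasks go into a thread-safe queue, worker threads drain the queue and store their partial results, and the main thread joins the workers and merges the results. The function extract_and_score is a placeholder for the feature extraction and weight-array calculation; note that in CPython, real speedups for CPU-bound work would require processes rather than threads, so the sketch only mirrors the structure of the algorithm.

```python
import threading, queue

# Sketch of Algorithm 4: parallel feature extraction over a thread-safe task
# list.  extract_and_score is a placeholder for the per-word-pair feature
# extraction and weight-array computation described in the text.

def parallel_extract(sentence, extract_and_score, n_threads=4):
    tasks = queue.Queue()                       # thread-safe data list
    for w1 in range(len(sentence)):
        for w2 in range(len(sentence)):
            tasks.put((w1, w2))

    results, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                w1, w2 = tasks.get_nowait()     # remove one task from the list
            except queue.Empty:
                return                          # list empty: this thread is done
            score = extract_and_score(sentence, w1, w2)
            with lock:
                results[(w1, w2)] = score       # store the partial result

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                                # wait until all threads finished
    return results                              # collected weight arrays A
```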
{
"text": "For non-projective parsing, we use the Non-Projective Approximation Algorithm of McDonald and Pereira (2006) . The algorithm rearranges edges in a dependency tree when they improve the score. Bohnet (2009) extended the algorithm by a threshold which biases the rearrangement of the edges. With a threshold, it is possible to gain a higher percentage of correct dependency links. We determined a threshold in experiments for Czech, English and German. In the experiment, we use the Hash Kernel and increase the thresh- Table 6 : Top LAS of the CoNLL 2009 of (1) Gesmundo et al. (2009) , (2) Bohnet (2009) , (3) Che et al. (2009) , and (4) Ren et al. (2009) ; LAS of the baseline parser and the parser with Hash Kernel. The numbers in bold face mark the top scores. We used for Catalan, Chinese, Japanese and Spanish the projective parsing algorithm. old at the beginning in small steps by 0.1 and later in larger steps by 0.5 and 1.0. Figure 2 shows the labeled attachment scores for the Czech, English and German development set in relation to the rearrangement threshold. The curves for all languages are a bit volatile. The English curve is rather flat. It increases a bit until about 0.3 and remains relative stable before it slightly decreases. The labeled attachment score for German and Czech increases until 0.3 as well and then both scores start to decrease. For English a threshold between 0.3 and about 2.0 would work well. For German and Czech, a threshold of about 0.3 is the best choice. We selected for all three languages a threshold of 0.3. Figure 2 : English, German, and Czech labeled attachment score (y-axis) for the development set in relation to the rearrangement threshold (x-axis).",
"cite_spans": [
{
"start": 81,
"end": 108,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF26"
},
{
"start": 192,
"end": 205,
"text": "Bohnet (2009)",
"ref_id": "BIBREF13"
},
{
"start": 561,
"end": 583,
"text": "Gesmundo et al. (2009)",
"ref_id": "BIBREF21"
},
{
"start": 590,
"end": 603,
"text": "Bohnet (2009)",
"ref_id": "BIBREF13"
},
{
"start": 610,
"end": 627,
"text": "Che et al. (2009)",
"ref_id": "BIBREF15"
},
{
"start": 638,
"end": 655,
"text": "Ren et al. (2009)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 518,
"end": 525,
"text": "Table 6",
"ref_id": null
},
{
"start": 934,
"end": 942,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1557,
"end": 1565,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-Projective Approximation Threshold",
"sec_num": "6"
},
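The role of the threshold can be sketched as follows: a dependent is only re-attached to a new head when the score improvement exceeds the threshold. The tree representation and the edge_score and creates_cycle callables are simplifying assumptions; this is not the actual approximation algorithm of McDonald and Pereira (2006).

```python
# Simplified sketch of threshold-biased edge rearrangement: re-attach a
# dependent to a new head only if the score gain exceeds the threshold.
# The tree representation and the edge_score / creates_cycle callables are
# illustrative assumptions, not the actual approximation algorithm.

def rearrange(heads, edge_score, creates_cycle, threshold=0.3):
    """heads: list where heads[d] is the current head of word d; index 0 is
    the artificial root and is never re-attached."""
    improved = True
    while improved:
        improved = False
        for d in range(1, len(heads)):
            current = edge_score(heads[d], d)
            # best alternative head for d that keeps the structure a tree
            candidates = [(edge_score(h, d), h) for h in range(len(heads))
                          if h != d and h != heads[d]
                          and not creates_cycle(heads, h, d)]
            if not candidates:
                continue
            best_score, best_head = max(candidates)
            if best_score - current > threshold:    # biased rearrangement
                heads[d] = best_head
                improved = True
    return heads
```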
{
"text": "We have developed a very fast parser with excellent attachment scores. For the languages of the 2009 CoNLL Shared Task, the parser could reach higher accuracy scores on average than the top performing systems. The scores for Catalan, Chinese and Japanese are still lower than the top scores. However, the parser would have ranked second for these languages. For Catalan and Chinese, the top results obtained transition-based parsers. Therefore, the integration of both techniques as in Nivre and McDonald (2008) seems to be very promising. For instance, to improve the accuracy further, more global constrains capturing the subcategorization correct could be integrated as in Riedel and Clarke (2006) . Our faster algorithms may make it feasible to consider further higher order factors.",
"cite_spans": [
{
"start": 486,
"end": 511,
"text": "Nivre and McDonald (2008)",
"ref_id": "BIBREF28"
},
{
"start": 676,
"end": 700,
"text": "Riedel and Clarke (2006)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In this paper, we have investigated possibilities for increasing parsing speed without any accuracy loss. The parsing time is 3.5 times faster on a single CPU core than the baseline parser which has an typical architecture for a maximum spanning tree parser. The improvement is due solely to the Hash Kernel. The Hash Kernel was also a prerequisite for the parallelization of the parser because it requires much less memory bandwidth which is nowadays a bottleneck of parsers and many other applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "By using parallel algorithms, we could further increase the parsing time by a factor of 3.4 on a 4 core CPU and including hyper threading by a factor of 4.6. The parsing speed is 16 times faster for the English test set than the conventional approach. The parser needs only 77 millisecond in average to parse a sentence and the speed will scale with the number of cores that become available in future. To gain even faster parsing times, it may be possible to trade accuracy against speed. In a pilot experiment, we have shown that it is possible to reduce the parsing time in this way to as little as 9 milliseconds. We are convinced that the Hash Kernel can be applied successful to transition based dependency parsers, phrase structure parsers and many other NLP applications. 4 ",
"cite_spans": [
{
"start": 780,
"end": 781,
"text": "4",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "We use a Intel Nehalem i7 CPU 3.33 Ghz. With turbo mode on, the clock speed was 3.46 Ghz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ">> n shifts n bits right, and % is the modulo operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " 4 We provide the Parser and Hash Kernel as open source for download from http://code.google.com/p/mate-tools.",
"cite_spans": [
{
"start": 1,
"end": 2,
"text": "4",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "dp,d(h,d) 17 l,hp,h+1p,d-1p,dp,d(h,d) 47 l,g-1p,gp",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,d f ,dp,d(h,d) 17 l,hp,h+1p,d-1p,dp,d(h,d) 47 l,g-1p,gp,d-1p,dp,d(h,d) 102 l,d l ,sp,d(h,d)\u2295r(h,d)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "dp,d(h,d) 21 l,hp,dp,gp,d(h,d,g) 52 l,gp,hp,h+1p,d(h,d) 59 l,sp,s-1p",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h f ,d f ,dp,d(h,d) 21 l,hp,dp,gp,d(h,d,g) 52 l,gp,hp,h+1p,d(h,d) 59 l,sp,s-1p,hp,d(h,d)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "d(h,d) 24 l,h f ,g f ,d(h,d,g) 55 l,g-1p,gp,h-1p,hp,d(h,d) 62 l,sp",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h f ,d f ,d(h,d) 24 l,h f ,g f ,d(h,d,g) 55 l,g-1p,gp,h-1p,hp,d(h,d) 62 l,sp,s+1p,h-1p,d(h,d)",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "d(h,d) 64 l,sp,s+1p,hp,d(h,d) 78 l,h l ,d(h,d) 27 l,g f ,dp,d(h,d,g) Sibling Features 65 l,s-1p",
"authors": [],
"year": null,
"venue": "-1p,gp,hp",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,g-1p,gp,hp,h+1p,d(h,d) 64 l,sp,s+1p,hp,d(h,d) 78 l,h l ,d(h,d) 27 l,g f ,dp,d(h,d,g) Sibling Features 65 l,s-1p,sp,hp,h+1p,d(h,d)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "dp,d(h,d) 29 l,d f ,gp,d(h,d,g) 31 l,hp,sp,d(h,d)\u2295r(h,d) 67 l,sp,s-1p",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,d l ,dp,d(h,d) 29 l,d f ,gp,d(h,d,g) 31 l,hp,sp,d(h,d)\u2295r(h,d) 67 l,sp,s-1p,dp,d(h,d)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "92 l,dp,gp,d(h,d,g) 33 l,p f ,s f ,d(h,d)\u2295r(h,d) 69 sp,dp",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,dp,d(h,d) 92 l,dp,gp,d(h,d,g) 33 l,p f ,s f ,d(h,d)\u2295r(h,d) 69 sp,dp,d-1p,d(h,d)",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "d) 94 l,g l ,dp,d(h,d,g) 35 l,s f ,pp,d(h,d)\u2295r(h,d) 71 s-1p,sp",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,d l ,hp,dp,d(h,d) 94 l,g l ,dp,d(h,d,g) 35 l,s f ,pp,d(h,d)\u2295r(h,d) 71 s-1p,sp,d-1p,dp,d(h,d)",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "dp,d(h,d) 95 l,h l ,gp,d(h,d,g) 36 l,s f ,dp,d(h,d)\u2295r(h,d) 72 sp",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h l ,d l ,dp,d(h,d) 95 l,h l ,gp,d(h,d,g) 36 l,s f ,dp,d(h,d)\u2295r(h,d) 72 sp,s+1p,dp,d+1p,d(h,d)",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "d) 96 l,d l ,gp,d(h,d,g) 37 l,s f ,dp,d(h,d)\u2295r(h,d) 73 s-1p",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h l ,hp,dp,d(h,d) 96 l,d l ,gp,d(h,d,g) 37 l,s f ,dp,d(h,d)\u2295r(h,d) 73 s-1p,sp,dp,d+1p,d(h,d)",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "hp,d(h,d) 74 l,\u2200dm,\u2200gm,d(h,d) 38 l,d f ,sp,d(h,d)\u2295r(h,d) Special Feature",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h l ,d l ,hp,d(h,d) 74 l,\u2200dm,\u2200gm,d(h,d) 38 l,d f ,sp,d(h,d)\u2295r(h,d) Special Feature",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Features 97 l,h l ,s l ,d(h,d)\u2295r(h,d) 39 \u2200l,hp,dp,xpbetween h",
"authors": [
{
"first": "G",
"middle": [],
"last": "Linear",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l,h l ,d l ,d(h,d) Linear G. Features 97 l,h l ,s l ,d(h,d)\u2295r(h,d) 39 \u2200l,hp,dp,xpbetween h,d 89 l,hp,dp,d(h,d) 42 l,gp,g+1p,dp,d(h,d) 98 l,d l ,s l ,d(h,d)\u2295r(h,d)",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Experiments with a Multilanguage Non-Projective Dependency Parser",
"authors": [
{
"first": "G",
"middle": [],
"last": "Attardi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "166--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attardi, G. 2006. Experiments with a Multilanguage Non-Projective Dependency Parser. In Proceedings of CoNLL, pages 166-170.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Random Projection, Margins, Kernels, and Feature-Selection",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
}
],
"year": 2006,
"venue": "LNCS",
"volume": "",
"issue": "",
"pages": "52--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blum., A. 2006. Random Projection, Margins, Ker- nels, and Feature-Selection. In LNCS, pages 52-68. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient Parsing of Syntactic and Semantic Dependency Structures",
"authors": [
{
"first": "B",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bohnet, B. 2009. Efficient Parsing of Syntactic and Semantic Dependency Structures. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Experiments with a Higher-order Projective Dependency Parser",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP/CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carreras, X. 2007. Experiments with a Higher-order Projective Dependency Parser. In EMNLP/CoNLL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multilingual Dependency-based Syntactic and Semantic Parsing",
"authors": [
{
"first": "W",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Che, W., Li Z., Li Y., Guo Y., Qin B., and Liu T. 2009. Multilingual Dependency-based Syntactic and Se- mantic Parsing. In Proceedings of the 13th Confer- ence on Computational Natural Language Learning (CoNLL-2009).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, M. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experi- ments with Perceptron Algorithms. In EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Online Passive-Aggressive Algorithms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Sixteenth Annual Conference on Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crammer, K., O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2003. Online Passive-Aggressive Algo- rithms. In Sixteenth Annual Conference on Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Online Passive-Aggressive Algorithms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crammer, K., O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2006. Online Passive-Aggressive Al- gorithms. Journal of Machine Learning Research, 7:551-585.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Three New Probabilistic Models for Dependency Parsing: An Exploration",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING-96)",
"volume": "",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisner, J. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceed- ings of the 16th International Conference on Com- putational Linguistics (COLING-96), pages 340- 345, Copenhaen.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bilexical Grammars and their Cubictime Parsing Algorithms",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "29--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisner, J., 2000. Bilexical Grammars and their Cubic- time Parsing Algorithms, pages 29-62. Kluwer Academic Publishers.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Latent Variable Model of Synchronous Syntactic-Semantic Parsing for Multiple Languages",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gesmundo, A., J. Henderson, P. Merlo, and I. Titov. 2009. A Latent Variable Model of Syn- chronous Syntactic-Semantic Parsing for Multiple Languages. In Proceedings of the 13th Confer- ence on Computational Natural Language Learning (CoNLL-2009), Boulder, Colorado, USA., June 4-5.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Shared Task: Syntactic and Semantic Dependencies in Multiple Languages",
"authors": [],
"year": null,
"venue": "Proceedings of the 13th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. In Proceedings of the 13th",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dependencybased Syntactic-Semantic Analysis with PropBank and NomBank",
"authors": [
{
"first": "R",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Shared Task Session of CoNLL-2008",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johansson, R. and P. Nugues. 2008. Dependency- based Syntactic-Semantic Analysis with PropBank and NomBank. In Proceedings of the Shared Task Session of CoNLL-2008, Manchester, UK.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Online Learning of Approximate Dependency Parsing Algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, R. and F. Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In In Proc. of EACL, pages 81-88.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Online Large-margin Training of Dependency Parsers",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, R., K. Crammer, and F. Pereira. 2005. On- line Large-margin Training of Dependency Parsers. In Proc. ACL, pages 91-98.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Integrating Graph-Based and Transition-Based Dependency Parsers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL-08",
"volume": "",
"issue": "",
"pages": "950--958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J. and R. McDonald. 2008. Integrating Graph- Based and Transition-Based Dependency Parsers. In ACL-08, pages 950-958, Columbus, Ohio.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Memory-Based Dependency Parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 8th CoNLL",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J., J. Hall, and J. Nilsson. 2004. Memory- Based Dependency Parsing. In Proceedings of the 8th CoNLL, pages 49-56, Boston, Massachusetts.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "An Efficient Algorithm for Projective Dependency Parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2003,
"venue": "8th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "149--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J. 2003. An Efficient Algorithm for Pro- jective Dependency Parsing. In 8th International Workshop on Parsing Technologies, pages 149-160, Nancy, France.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Non-Projective Dependency Parsing in Expected Linear Time",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP",
"volume": "",
"issue": "",
"pages": "351--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J. 2009. Non-Projective Dependency Parsing in Expected Linear Time. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 351-359, Suntec, Singapore.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Random Features for Large-Scale Kernel Machines",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Recht",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahimi, A. and B. Recht. 2008. Random Features for Large-Scale Kernel Machines. In Platt, J.C., D. Koller, Y. Singer, and S. Roweis, editors, Ad- vances in Neural Information Processing Systems, volume 20. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Parsing Syntactic and Semantic Dependencies for Multiple Languages with a Pipeline Approach",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "D",
"middle": [
"Ji"
],
"last": "Jing Wan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ren, H., D. Ji Jing Wan, and M. Zhang. 2009. Pars- ing Syntactic and Semantic Dependencies for Mul- tiple Languages with a Pipeline Approach. In Pro- ceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009), Boul- der, Colorado, USA., June 4-5.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Incremental Integer Linear Programming for Non-projective Dependency Parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "129--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riedel, S. and J. Clarke. 2006. Incremental Inte- ger Linear Programming for Non-projective Depen- dency Parsing. In Proceedings of the 2006 Con- ference on Empirical Methods in Natural Language Processing, pages 129-137, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hash Kernels for Structured Data",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Petterson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
}
],
"year": 2009,
"venue": "In Journal of Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shi, Q., J. Petterson, G. Dror, J. Langford, A. Smola, and S.V.N. Vishwanathan. 2009. Hash Kernels for Structured Data. In Journal of Machine Learning.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A Latent Variable Model for Generative Dependency Parsing",
"authors": [
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "144--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Titov, I. and J. Henderson. 2007. A Latent Variable Model for Generative Dependency Parsing. In Pro- ceedings of IWPT, pages 144-155.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Statistical Dependency Analysis with Support Vector Machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamada, H. and Y. Matsumoto. 2003. Statistical De- pendency Analysis with Support Vector Machines. In Proceedings of IWPT, pages 195-206.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training -Hash Kernel for n \u2190 1 to E // iteration over the training epochs for i \u2190 1 to I // iteration over the training exmaplesk \u2190 (n \u2212 1) * I + i \u03b3 \u2190 E * I \u2212 k + 2 //passive-aggressive weight t s e,k ; A \u2190 extr.-features-&-calc-arrays(i, \u2212 \u2192 w ) ; t e e,k t s p,k ; yp \u2190 predicte-projective-parse-tree(A);t e p,k t s a,k ; ya \u2190 non-projective-approx.(yp,A); t e a,k update \u2212 \u2192 w , \u2212 \u2192 v according to \u2206(yp, yi) and \u03b3 w = v/(E * I) // average",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "88.32 17.58 88.39 17.34 179669557 88.34 17.65 88.28 17.84",
"content": "<table><tr><td/><td/><td/><td>size |</td></tr><tr><td>vector size</td><td colspan=\"2\">h1 #(h1)</td><td>h2 #(h2)</td></tr><tr><td colspan=\"2\">411527 85.67</td><td colspan=\"2\">0.41 85.74</td><td>0.41</td></tr><tr><td colspan=\"2\">3292489 87.82</td><td colspan=\"2\">3.27 87.97</td><td>3.28</td></tr><tr><td colspan=\"2\">10503061 88.26</td><td colspan=\"2\">8.83 88.35</td><td>8.77</td></tr><tr><td colspan=\"4\">21006137 88.19 12.58 88.41 12.53</td></tr><tr><td colspan=\"4\">42012281 88.32 12.45 88.34 15.27 115911564</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "The labeled attachment scores for different weight vector sizes and the number of nonzero values in the feature vectors in millions.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "Features Groups. l represents the label, h the head, d the dependent, s a sibling, and g a grandchild,",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "Top CoNLL 09 85.77 (1) 87.86(1) 79.19(4) 80.38 (1) 89.88 (2) 87.48(2) 92.57(3) 87.64(1)",
"content": "<table><tr><td>System</td><td colspan=\"2\">Average Catalan</td><td colspan=\"2\">Chinese Czech</td><td>English</td><td colspan=\"3\">German Japanese Spanish</td></tr><tr><td colspan=\"2\">Baseline Parser 85.10</td><td>85.70</td><td>76.88</td><td>76.93</td><td>90.14</td><td>87.64</td><td>92.26</td><td>86.12</td></tr><tr><td>this work</td><td>86.33</td><td>87.45</td><td>76.99</td><td>80.96</td><td>90.33</td><td>88.06</td><td>92.47</td><td>88.13</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}