|
{ |
|
"paper_id": "C98-1048", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:30:04.575995Z" |
|
}, |
|
"title": "Experiments with Learning Parsing Heuristics", |
|
"authors": [ |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Delisle", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Drpartement de mathrmatiques et d'informatique", |
|
"institution": "Universit6 du Qurbec h Trois-Rivirres Trois-Rivirres", |
|
"location": { |
|
"postCode": "G9A 5H7", |
|
"region": "Qurbec", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Li~tourneau", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Ottawa", |
|
"location": { |
|
"postCode": "K1N 6N5", |
|
"settlement": "Ottawa", |
|
"region": "Ontario, Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Stan", |
|
"middle": [], |
|
"last": "Matwin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Ottawa", |
|
"location": { |
|
"postCode": "K1N 6N5", |
|
"settlement": "Ottawa", |
|
"region": "Ontario, Canada" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Any large language processing software relies in its operation on heuristic decisions concerning the strategy of processing. These decisions are usually \"hard-wired\" into the software in the form of handcrafted heuristic rules, independent of the nature of the processed texts. We propose an alternative, adaptive approach in which machine learning techniques learn the rules from examples of sentences in each class. We have experimented with a variety of learning techniques on a representative instance of this problem within the realm of parsing. Our approach lead to the discovery of new heuristics that perform significantly better than the current hand-crafted heuristic. We discuss the entire cycle of application of machine learning and suggest a methodology for the use of machine learning as a technique for the adaptive optimisation of language-processing software.", |
|
"pdf_parse": { |
|
"paper_id": "C98-1048", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Any large language processing software relies in its operation on heuristic decisions concerning the strategy of processing. These decisions are usually \"hard-wired\" into the software in the form of handcrafted heuristic rules, independent of the nature of the processed texts. We propose an alternative, adaptive approach in which machine learning techniques learn the rules from examples of sentences in each class. We have experimented with a variety of learning techniques on a representative instance of this problem within the realm of parsing. Our approach lead to the discovery of new heuristics that perform significantly better than the current hand-crafted heuristic. We discuss the entire cycle of application of machine learning and suggest a methodology for the use of machine learning as a technique for the adaptive optimisation of language-processing software.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Any language processing program--in our case, a top-down parser which outputs only the first tree it could find--must make decisions as to what processing strategy, or rule ordering, is most appropriate for the problem (i.e. string) at hand. Given the size and the intricacy of the rule-base and the goal (to optimise a parser's precision, or recall, or even its speed), this becomes a complex decision problem. Without precise knowledge of the kinds of texts that will be processed, these decisions can at best be educated guesses. In the parser we used, they were performed with the help of hand-crafted heuristic rules, which are briefly presented in section 2. Even when the texts are available to fine-tune the parser, it is not obvious how these decisions are to be made from texts alone. Indeed, the decisions may often be expressed as rules whose representation is in terms which are not directly or easily available from the text (e.g. non-terminals of the grammar of the language in which the texts are written). Hence, any technique that may automatically or semi-automatically adapt such rules to the corpus at hand will be valuable. As it is often the case, there may be a linguistic shift in the kinds of texts that are processed, especially if the linguistic task is as general as parsing. It is then interesting to adapt the \"version\" of the parser to the corpus at hand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We report on an experiment that targets this kind of adaptability. We use machine learning as an artificial intelligence technique that achieves adaptability. We cast the task described above as a classification task: which, among the parser's toplevel rules, is most appropriate to launch the parsing of the current input string? Although we restricted ourselves to a subset of a parser, our objective is broader than just applying an existing learning system on this problem. What is interesting is: a) definition of the attributes in which examples are given, so that the attributes are both obtainable automatically from the text and lead to good rules--this is called \"feature engineering\"; b) selection of the most interesting learned rules; c) incorporation of the learned rules in the parser; d) evaluation of the performance of the learned rules after they have been incorporated in the parser. It is the lessons from the whole cycle that we followed in the work that we report here, and we suggest it as a methodology for an adaptiw~ optimisation of language processing programs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rule-based parser we used was DIPETT [Delisle 1994 ]: it is a top-down, depth-first parser, augmented with a few look-ahead mechanisms, which returns the first analysis (parse tree). The fact that our parser produces only a single analysis, the \"best\" one according to its hand-crafted heuristics, is part of the motivation for this work. When DIPETT is given an input string, it first selects the top-level roles it is to attempt, as well as their ordering in this process. Ideally, the parser would find an optimal order that minimises parsing time and maximises parsing accuracy by first selecting the most promising rules. For example, there is no need to treat a sentence as multiply coordinated or compound when the data contains only one verb. DIPETT has three top-level rules for declarative statements: i) MULT_COOR for multiple (normally, three or more) coordinated sentences; ii) COMPOUND for compound sentences, that is, correlative and simple coordination (of, normally, two sentences); iii) NONCOMPOUND for simple and complex sentences, that is, a single main clause with zero or more subordinate clauses ( [Quirk et el. 1985] ). To illustrate the data that we worked with and the classes for which we needed the rules, here are two sentences (from the Brown corpus) used in our experiments: \"And know, while all this went on, that there was no real reason to suppose that the murderer had been a guest in either hotel.\" iS a non-compound sentence, and \"Even I can remember nothing but ruined cellars and tumbled pillars, and nobody has lived there in the memory of any living man.\" is a compound sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 54, |
|
"text": "[Delisle 1994", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1125, |
|
"end": 1144, |
|
"text": "[Quirk et el. 1985]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The existing hand-crafted heuristics", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The current hand-crafted heuristic ( [Delisle 1994] ) is based on three parameters, obtained after (non-disambiguating) lexical analysis and before parsing: 1) the number of potential verbs I in the data, 2) the presence of potential coordinators in the data, and 3) verb density (roughly speaking, it indicates how potential verbs are distributed). For instance, low density means that verbs are scattered throughout the input string; high density means that the verbs appear close to each other in the input string, as in a conjunction i A \"potential\" verb may actually turn out to be, say, a noun, but only parsing can tell us how such a lexical ambiguity has been resolved. If the input were preprocessed by a tagger, the ambiguity might disappear. of verbs such as \"Verbl and Verb2 and Verb3\". Given the input string's features we have just discussed, DIPETT's algorithm for top-level rule selection returns an ordered list of up to 3 of the rules COMPOUND, NONCOMPOUND, and MULT_COOR to be attempted when parsing this string. For the purposes of our experiment, we simplified the situation by neglecting the MULT COOR rule since it was rarely needed when parsing reallife text. Thus, the original problem went from a 3class to a 2-class classification problem: COMPOUND or NON_COMPOUND.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 51, |
|
"text": "[Delisle 1994]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The existing hand-crafted heuristics", |
|
"sec_num": "2" |
|
}, |
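The selection logic just described can be sketched as a small rule-ordering function. This is an illustrative reconstruction only: the thresholds, the function name, and the density test are our own assumptions, not DIPETT's actual hand-crafted algorithm.

```python
def order_top_level_rules(num_potential_verbs, has_coordinator, high_verb_density):
    """Sketch of DIPETT-style top-level rule ordering.

    Returns the ordered list of top-level rules to attempt.
    Thresholds are illustrative, not DIPETT's real ones.
    """
    if num_potential_verbs <= 1:
        # A single potential verb: coordination of clauses is impossible.
        return ["NONCOMPOUND"]
    if has_coordinator and high_verb_density and num_potential_verbs >= 3:
        # Verbs clustered around coordinators suggest multiple coordination.
        return ["MULT_COOR", "COMPOUND", "NONCOMPOUND"]
    if has_coordinator:
        return ["COMPOUND", "NONCOMPOUND"]
    return ["NONCOMPOUND", "COMPOUND"]
```

The point of the sketch is only that the ordering is a deterministic function of the three lexical parameters; the learned rules of section 3 replace exactly this kind of decision.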
|
{ |
|
"text": "As any heuristic, the top-level rule selection mechanism just described is not perfect. Among the principal difficulties, the most important are: i) the accuracy of the heuristic is limited and ii) the internal choices are relatively complex and somewhat obscure from a linguist's viewpoint. The aim of this research was to use classification systems as a tool to help developing new knowledge for improving the parsing process. To preserve the broad applicability of DIPETT, we have emphasised the generality of the results and did not use any kind of domain knowledge. The sentences used to build the classifiers and evaluate the performance have been randomly selected from five unrelated real corpora. Typical classification systems (e.g. decision trees, neural networks, instance based learning) require the data to be represented by feature vectors. Developing such a representation for the task considered here is difficult. Since the top-level rule selection heuristic is one of the first steps in the parsing process, very little information for making this decision is available at the early stage of parsing. All the information available at this phase is provided by the (non-disambiguating) lexical analysis that is performed before parsing. This preliminary analysis provides four features: 1) number of potential verbs in the sentence, 2) presence of potential coordinators, 3) verb density, and 4) number of potential auxiliaries. As mentioned above, only the first three features are actually used by the current hand-crafted heuristic. However, preliminary experiments have shown that no interesting knowledge can be inferred by using only these four features. We then decided to improve our representation by the use of DIPETT's fragmentary parser: an optional parsing mode in which DIPETT does not attempt to produce a single structure for the current input string but, rather, analyses a string as a sequence of major constituents (i.e. 
noun, verb, prepositional and adverbial phrases). The new features obtained from fragmentary parsing are: the number of fragments, the number of \"verbal\" fragments (fragments that contain at least one verb), number of tokens skipped, and the total percentage of the input recognised by the fragmentary parser. The fragmentary parser is a cost-effective solution to obtain a better representation of sentences because it is very fast--on average, less than one second of CPU time for any sentence--in comparison to full parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
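The four fragment-derived features can be computed from a fragmentary parse in a few lines. A minimal sketch, assuming fragments are lists of tagged tokens and that a token ending in "/V" marks a potential verb; both conventions are invented here for illustration, not DIPETT's.

```python
def fragment_features(fragments, skipped_tokens, total_tokens):
    """Derive the four features obtained from fragmentary parsing.

    `fragments` is a list of token lists; a fragment is "verbal" if it
    contains at least one token tagged "/V" (illustrative convention).
    """
    num_verbal = sum(
        1 for frag in fragments if any(tok.endswith("/V") for tok in frag)
    )
    return {
        "num-fragments": len(fragments),
        "num-verbal-fragments": num_verbal,
        "num-tokens-skip": skipped_tokens,
        "%-input-recognized": 100.0 * (total_tokens - skipped_tokens) / total_tokens,
    }
```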
|
{ |
|
"text": "Moreover, the information obtained from the fragmentary parser is adequate for the task at hand because it represents well the complexity of the sentence to be parsed. In addition to the features obtained from the lexical analysis and those obtained from the fragmentary parser, we use the string length (number of tokens in the sentence) to describe each sentence. The attribute used to classify the sentences, provided by a human expert, is called rule-to-attempt and it can take two values: compound or non-compound, according to the type of the sentence. To surnmarise, we used the ten following features to represent each sentence: 1) string-length: number of tokens (integer); 2) num-potential-verbs: number of potential verbs (integer); 3) nnm-potential-auxiliary: number of potential auxiliaries (integer); 4) verbdensity: a flag that indicates if all potential verbs are separated by coordinators (boolean); 5) nbr-potentialcoordinators: number of potential coordinators (integer); 6) num-fragments: number of fragments used by the fragmentary parser (integer); 7) numverbal-fragments: number of fragments that contain at least one potential verb (integer); 8) num-tokensskip: number of tokens not considered by the fragmentary parser (integer); 9) %-input-recognized: percentage of the sentence recognized, i.e. not skipped (real); 10) rule-to-attempt: type of the sentence (COMPOUND or NON-COMPOUND).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
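The ten-feature representation can be written down as a record type. The field names follow the paper's feature list; the Python spelling and types are ours.

```python
from dataclasses import dataclass

@dataclass
class SentenceExample:
    """One sentence described by the ten features listed above."""
    string_length: int               # 1) number of tokens
    num_potential_verbs: int         # 2)
    num_potential_auxiliary: int     # 3)
    verb_density: bool               # 4) all potential verbs separated by coordinators
    num_potential_coordinators: int  # 5)
    num_fragments: int               # 6)
    num_verbal_fragments: int        # 7)
    num_tokens_skip: int             # 8)
    pct_input_recognized: float      # 9)
    rule_to_attempt: str             # 10) class label: "COMPOUND" or "NON-COMPOUND"
```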
|
{ |
|
"text": "We built the first data set by randomly selecting 300 sentences from four real texts: a software user manual, a tax guide, a junior science textbook on weather phenomena, and the Brown corpus. Each sentence was described in terms of the above features, which are of course acquired automatically by the lexical analyser and the fragmentary parser, except for rule-to-attempt as mentioned above. After a preliminary analysis of these 300 sentences, we realised that we had unbalanced numbers of examples of compound and non-compound sentences: non-compounds are approximately five times more frequent than compounds. However, it is a well-known fact in machine learning that such unbalanced training sets are not suitable for inductive learning. For this reason, we have re-sampled our texts to obtain roughly an equal number of non-compound and compound sentences (55 compounds and 56 noncompounds).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
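The balancing step can be sketched as down-sampling of the majority class. The paper re-sampled the source texts rather than an example list, so this is a simplified stand-in with the same effect on class proportions; the function name is ours.

```python
import random

def balance_by_downsampling(examples, label_of, seed=0):
    """Return a shuffled subset with equally many examples per class."""
    rng = random.Random(seed)
    by_class = {}
    for ex in examples:
        by_class.setdefault(label_of(ex), []).append(ex)
    # Keep as many examples of each class as the rarest class has.
    n = min(len(group) for group in by_class.values())
    balanced = [ex for group in by_class.values() for ex in rng.sample(group, n)]
    rng.shuffle(balanced)
    return balanced
```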
|
{ |
|
"text": "Our experiment consisted in running a variety of attribute classification systems: IMAFO ( [Famili & Turney 1991] ), C4.5 ( [Quinlan 1993]) , and different learning algorithms from MLC++ ( [Kohavi et al. 1994] ). IMAFO includes an enhanced version of ID3 and an interface to C4.5 (we used both engines in our experimentation). MLC++ is a machine learning library developed in C++. We experimented with many algorithms included in MLC++.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 113, |
|
"text": "[Famili & Turney 1991]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 139, |
|
"text": "[Quinlan 1993])", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 209, |
|
"text": "[Kohavi et al. 1994]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We concentrated mainly on learning algorithms that generate results in the form of rules. For this project, rules are more interesting than other form of results because they are relatively easy to integrate in a rule-based parser and because they can be evaluated by experts in the domain. However, for accuracy comparison, we have also used learning systems that do not generate rules in terms of the initial representation: neural networks and instance-based systems. We randomly divided our data set into the training set (2/3 of the examples, or 74 instances) and the testing set (1/3 of the examples, or 37 instances). The error rates presented in Table 1 for the first four systems (decision rules systems) represent the average rates for all rules generated by these systems. However, not all rules were particularly interesting. We kept only some of them for further evaluation and integration in the parser. Our selection criteria were: 1) the estimated error rate, 2) the \"reasonability\" (only rules that made sense for a computational linguist were kept), 3) the readability (simple rules are preferred), and 4) the novelty (we discarded rules that are already in the parser). Tables 2 and 3 present rules that satisfy all the above the criteria: Table 2 focuses on rules to identify compound sentences while Table 3 presents rules to identify non-compound sentences. The error rate for each rule is also given. These error rates were obtained by a 10 fold cross-validation test.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 654, |
|
"end": 661, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1259, |
|
"end": 1266, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1321, |
|
"end": 1328, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
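The evaluation protocol (a random 2/3-1/3 split, plus 10-fold cross-validation for the per-rule error rates) is generic and can be sketched independently of any particular learner; `train_fn` below stands in for whichever learning system is being evaluated, and the function names are ours.

```python
import random

def split_train_test(examples, test_fraction=1 / 3, seed=0):
    """Random 2/3 training / 1/3 testing split."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

def cross_val_error(examples, train_fn, k=10, seed=0):
    """Average error rate over k folds; examples are (features, label) pairs.

    `train_fn` takes a training set and returns a classifier function.
    """
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    rates = []
    for i, test in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        classify = train_fn(train)
        rates.append(sum(1 for x, y in test if classify(x) != y) / len(test))
    return sum(rates) / k
```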
|
{ |
|
"text": "Rules to identify COMPOUND sen-Error tences rate (%) hum-potential-verbs <= 3 AND num-potential-coordinators > 0 AND hum-verbal-fragments > 1 hum-fragments > 7 hum-fragments > 5 AND num-verbal-fragments <= 2 string-length <= 17 AND hum-potential-coordinators > 0 AND num-verbal-fragments > 1 num-potential-verbs > 1 AND num-potential-verbs <= 3 AND num-potential-coordinators > 0 AND num-fragments > 4 num-potential-coordinators > 0 AND num-fragments >= 7 10.5 '9.4 23.9 5.4 4.2 4.3 num-potential-coordinators > 0 AND 16.8 num-verbal-fragments > 1 hum-potential-coordinators > 0 AND num-fragments < 7 AND 4.7 string-length < 18 The error rates that we have obtained are quite respectable for a two-class learning problem given the volume of available examples. Moreover, the rules are justified and make sense. They are also very compact in comparison with the original hand-crafted heuristics. We will see in section 4 how these rules behave on unseen data from a totally different text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning rules from sentences", |
|
"sec_num": "3" |
|
}, |
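Each learned rule is directly executable as a conjunction of threshold tests over the feature vector. A sketch of two of the rules above (the pairing of error rates to rules follows the column order of the table and is our reading of it; function names are ours):

```python
def compound_rule_a(f):
    """num-potential-verbs in (1, 3] AND coordinators present AND
    num-fragments > 4 (listed error rate: 4.2%)."""
    return (1 < f["num-potential-verbs"] <= 3
            and f["num-potential-coordinators"] > 0
            and f["num-fragments"] > 4)

def compound_rule_b(f):
    """string-length <= 17 AND coordinators present AND
    num-verbal-fragments > 1 (listed error rate: 5.4%)."""
    return (f["string-length"] <= 17
            and f["num-potential-coordinators"] > 0
            and f["num-verbal-fragments"] > 1)
```

Representing rules as predicates like these is what makes the implementations of section 4.1 straightforward: a rule set is just a list of functions.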
|
{ |
|
"text": "[ rate (%) num-potential-verbs <= 3 AND num-verbal-fragments <= 1 string-length > 10 AND num-potential-verbs <= 3 AND num-fragments <= 4 string-length <= 21 AND hum-potential-coordinators = 0 hum-potential-coordinators = 0 AI'~D 8.3 6.7 5.6 9.7 num-fragments <= 7 Attribute classification systems such as those used during the experiment reported here are highly sensitive to the adequacy of the features used to represent the instances. For our task (parsing), these features were difficult to find and we had only a rough idea about their appropriateness. For this reason, we felt that better results could be obtained by transforming the original instance space into a more adequate space by creating new attributes. In machine learning research, this process is referred as constructive learning, or constructive induction ( [Wnek & Michalski 1994] ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 829, |
|
"end": 852, |
|
"text": "[Wnek & Michalski 1994]", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rules to identify NON-[ Error COMPOUND sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We even attempted to use principal component analysis (PCA) ( [Johnson & Wichern 1992] ) as a technique of choice for simple constructive learning but we did not get very impressive results. We see two reasons for this. The primary reason is that the ratio between the number of examples and the number of attributes is not high enough for PCA to derive high-quality new attributes. The second reason is that the original attributes are already highly non-redundant. It is important to note that these rules do not satisfy the reasonability criteria applied to the original representation. In fact, losing the understandability of the attributes is the usual consequence of almost all approaches that change the representation of instances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 86, |
|
"text": "[Johnson & Wichern 1992]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rules to identify NON-[ Error COMPOUND sentences", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We explained in section 3 how we derived new parsing heuristics with the help of machine learning techniques. The next step was to evaluate how well would the new rules perform if we replaced the parser's current hand-crafted heuristics with the new ones. In particular, we wanted to evaluate the accuracy of the heuristics in correctly identifying the appropriate rule, COMPOUND or NON COMPOUND, that should first be attempted by the parser. This goal was prompted by an earlier evaluation of DIPETT in which it was noted that a good proportion of questionable parses (i.e. either bad parses or correct but too timeconsuming parses) were caused by a bad first attempt, such as attempting COMPOUND instead of NON_COMPOUND.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the new rules", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our machine learning experiments lead us to two classes of rules obtained from a variety of classifiefs and concerned only with the notion of compoundness: 1) those predicting a COMPOUND sentence, and 2) those predicting a NON COMPOUND. The problem was then to decide what should be done with the set of new rules. More precisely, before actually implementing the new rules and including them in the parser, we first had to decide on an appropriate strategy for exploiting the set of new rules. We now describe the three implementations that we realised and evaluated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From new rules to new parsers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The first implements only the rules for the COMPOUND class--one big rule which is a disjunct of all the learned rules for that class. And since there are only two alternatives, either COMPOUND or NONCOMPOUND, if none of the COMPOUND rules applies, the NONCOMPOUND class is predicted. This first implementation is referred to as C-Imp. The second implementation, referred to as NC-Imp, does exactly the opposite: i.e. it implements only the rules predicting the NONCOMPOUND class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "From new rules to new parsers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The third implementation, referred to as NC_C-Imp, benefits from the first two implementations. The class of a new sentence is determined by combining the output from C-Imp and NC-Imp. The combination of the output is done according to the following decision table in Table C The first two lines of this decision table are obvious since the outputs from both implementations are consistent. When the two implementations disagree, the NC C-Imp implementation predicts the non-compound. This prediction is justified by a bayesian argumentation. In the absence of any additional knowledge, we are forced to assign an equal probability of success to each of the two sets of rules and the most probable class becomes the one with the highest frequency. Thus, in general, non-compound sentences are more frequent then compound ones. One obvious way to improve this third implementation would be to precisely evaluate the accuracies of the two sets of rules and then incorporate these accuracies in the decision process.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 277, |
|
"text": "Table C", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "From new rules to new parsers", |
|
"sec_num": "4.1" |
|
}, |
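The decision table reduces to a simple policy: when the two one-sided rule sets agree, keep the agreed class; when they disagree, fall back to the majority class. A minimal sketch (function names are ours; rule sets are lists of predicates as in section 3):

```python
def c_imp(compound_rules, features):
    """C-Imp: predict COMPOUND iff any learned COMPOUND rule fires."""
    return "COMPOUND" if any(r(features) for r in compound_rules) else "NON-COMPOUND"

def nc_imp(noncompound_rules, features):
    """NC-Imp: predict NON-COMPOUND iff any learned NON-COMPOUND rule fires."""
    return "NON-COMPOUND" if any(r(features) for r in noncompound_rules) else "COMPOUND"

def nc_c_imp(c_pred, nc_pred):
    """NC_C-Imp: keep agreed predictions; on disagreement, fall back to
    the more frequent class (NON-COMPOUND), the Bayesian tie-break."""
    return c_pred if c_pred == nc_pred else "NON-COMPOUND"
```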
|
{ |
|
"text": "To perform the evaluation, we randomly sampled 200 sentences from a new corpus on mechanics ( [Atkinson 1990 ]): note that this text had not been used to sample the sentences used for learning. Out of these 200 sentences, 10 were discarded since they were not representative (e.g. one-word \"sentences\"). We ran the original implementation of DIPETT plus the three new implementations described in the previous section on the remaining 190 test sentences. Table 5 presents the results. The error-rate, the standard deviation of the error-rate and the p-value are listed for each implementation. The p-value gives the probability that DIPETT's original hand-crafted heuristics are better than the new heuristics. In other words, a small p-value means an increase in performance with a high probability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 108, |
|
"text": "[Atkinson 1990", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 462, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The results", |
|
"sec_num": "4.2" |
|
}, |
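The paper does not spell out its exact significance test; a standard choice consistent with the description (a one-sided comparison of two error rates measured on the same 190 test sentences) is a pooled two-proportion z-test, sketched here. The error counts in the usage note are hypothetical, not the paper's figures.

```python
import math

def p_value_one_sided(errors_old, errors_new, n):
    """Probability that the old heuristic is at least as good as the new
    one, under a pooled two-proportion z-test with a normal approximation.
    Small values mean the new heuristic is better with high confidence."""
    p_old, p_new = errors_old / n, errors_new / n
    pooled = (errors_old + errors_new) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    z = (p_old - p_new) / se
    # One-sided upper-tail p-value of the standard normal distribution.
    return 0.5 * math.erfc(z / math.sqrt(2))
```

For example, with hypothetical counts of 40 errors for the original heuristic versus 26 for a new one on 190 sentences, the p-value falls below 0.05.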
|
{ |
|
"text": "Original heur. C-Imp NC-Imp NC_C-Imp We observe that all new automatically-derived heuristics did beat DIPETT's hand-crafted heuristics and quite clearly. The results from the third implementation (i.e. NC_C-Imp) are especially remarkable: with a confidence of over 99%, we can affirm that the NC_C-lmplementation will outperform DIPETT's original heuristic. We also note that the error rate drops by 35% of its value for the original heuristic. Similarly, with a confidence of 87.4%, we can affirm that the implementation that uses only the C-rules (i.e. C-Imp) will perform better then DIPETT's current heuristics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These very good results are also amplified by the fact that the testing described in this evaluation was done on sentences totally independent from the ones used for training. Usually, in machine learning research, the training and the testing sets are sampled from the same original data set, and the kind of \"out-of-sample\" testing that we perform here has only recently come to the attention of the learning community ( [Ezawa et al. 1996] ). Our experiments have shown that it is possible to infer rules that perform very well and are highly meaningful in the eyes of an expert even if the training set is relatively small. This indicates that the representation of sentences that we chose for the problem was adequate. Finally, an other important output of our research is the identification of the most significant attributes to distinguish non-compound sentences from compound ones. This alone is valuable information to a computational linguist. Only five out of ten original attributes are used by the learned rules, and all of them are cheap to compute: two attributes are derived by fragmentary parsing (number of verbal fragments and number of fragments), and three are lexical (number of potential verbs, length of the input string, and presence of potential coordinators).", |
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 442, |
|
"text": "[Ezawa et al. 1996]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There have been successful attempts at using machine learning in search of a solution for linguistic tasks, e.g. discriminating between discourse and sentential senses of cues ( [Litman 1996]) or resolution of coreferences in texts ([McCarthy & Lehnert 1995] ). Like our work, these problems are cast as classification problems, and then machine learning (mainly C4.5) techniques are used to induce classifiers for each class. What makes these applications different from ours is that they have worked on surface linguistic or mixed surface linguistic and intonational representation, and that the classes are relatively balanced, while in our case the class of compound sentences is much less numerous than the class of non-composite sentences. Such unbalanced classes create problems for the majority of inductive learning systems. A distinctive feature of our work is the fact that we used machine learning techniques to improve an existing rule-based natural language processor from the inside. This contrasts with approaches where there are essentially no explicit rules, such as neural networks (e.g. [Buo 1996]) , or approaches where the machine learning algorithms attempt to infer--via deduction (e.g. [Samuelsson 1994 ]), induction (e.g. [Theeramunkong et al. 1997] ; [Zelle & Mooney 1994] ) under user cooperation (e.g. [Simmons & Yu 1992] ; [Hermjakob & Mooney 1997] ), transformation-based error-driven learning (e.g. [Brill 1993]) , or even decision trees (e.g. [Magerman 1995] )--a grammar from raw or preprocessed data. In our work, we do not wish to acquire a grammar: we have one and want to devise a mechanism to make some of its parts adaptable to the corpus at hand or, to improve some aspect of its performance. Other researchers, such as [Lawrence et al. 1996] , have compared neural networks and machine learning methods at the task of sentence classification. In this task, the system must classify a string as either grammatical or not. 
We do not content ourselves with results based on a grammatical/ungrammatical dichotomy. We are looking for heuristics, using relevant features, that will do better than the current ones and improve the overall performance of a natural language processor: this is a very difficult problem (see, e.g., [Huyck & Lytinen 1993] ). One could also look at this problem as one of optimisation of a rule-based system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 192, |
|
"text": "[Litman 1996])", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 258, |
|
"text": "([McCarthy & Lehnert 1995]", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1107, |
|
"end": 1118, |
|
"text": "[Buo 1996])", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1227, |
|
"text": "[Samuelsson 1994", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1248, |
|
"end": 1275, |
|
"text": "[Theeramunkong et al. 1997]", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1278, |
|
"end": 1299, |
|
"text": "[Zelle & Mooney 1994]", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1331, |
|
"end": 1350, |
|
"text": "[Simmons & Yu 1992]", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1353, |
|
"end": 1378, |
|
"text": "[Hermjakob & Mooney 1997]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1431, |
|
"end": 1444, |
|
"text": "[Brill 1993])", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1476, |
|
"end": 1491, |
|
"text": "[Magerman 1995]", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1761, |
|
"end": 1783, |
|
"text": "[Lawrence et al. 1996]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 2264, |
|
"end": 2286, |
|
"text": "[Huyck & Lytinen 1993]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Work somewhat related to ours was conducted by [Samuelsson 1994 ] who used explanation-based generalisation to extract a subset of a grammar that would parse a given corpus faster than the original, larger grammar--- [Neumann 1997 ] also used EBL but for a generation task. In our case, we are not looking for a subset of the existing rules but, rather, we are looking for brand new rules that would replace and outperform the existing rules. We should also mention the work of [Soderland 1997] who also worked on the comparison of automatically learned and hand-crafted rules for text analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 63, |
|
"text": "[Samuelsson 1994", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 230, |
|
"text": "[Neumann 1997", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 494, |
|
"text": "[Soderland 1997]", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have presented an experiment which demonstrates that machine learning may be used as a technique to optimise in an adaptive manner the high-level decisions that any parser must make in the presence of incomplete information about the properties of the text it analyses. The results show clearly that simple and understandable rules learned by machine learning techniques can surpass the performance of heuristics supplied by an experienced computational linguist. Moreover, these very encouraging results indicate that the representation that we chose and discuss was an adequate one for this problem. We feel that a methodology is at hand to extend and deepen this approach to language processing programs in general. The methodology consists of three main steps: 1) feature engineering, 2) learning, using several different available learners, 3) evaluation, with the recommendation of using the \"out-ofsample\" approach to testing. Future work will focus on improvements to constructive learning; on new ways of integrating the rules acquired by different learners in the parser; and on the identification of criteria for selecting parser rules that have the best potential to benefit from the generalisation of our results. Les syst6mes ou programmes de traitement de la langue naturelle doivent prendre des ddcisions quant au choix des meilleures strat6gies ou r6gles h appliquer en cours de r6solution d'un problbme particulier. Pour un analyseur syntaxique constitu6 d'une base de r6gles symboliques, le cas auquel nous nous intdressons ici, ces ddcisions peuvent consister h s61ectionner les r~gles ou l'ordonnancement de celles-ci permettant de produire la plus rapide ou la plus prdcise analyse syntaxique pour un 6nonc6, un type d'~noncd ou m~me un corpus spdcifique. La complexit6 de telles bases de r6gles grammaticales et ieurs subtilitds computationnelles et linguistiques font en sorte que la prise de ces d6cisions constitue un problbme difficile. 
Nous nous sommes donc fix6 comme objectif de trouver des techniques qui permettraient d'apprendre des heuristiques performantes de prise de ddcision afin de les incorporer hun analyseur syntaxique existant. Pour atteindre une telle adaptabilit6, nous avons adopt6 une approche d'apprentissage automatisd supportde par l'utilisation de syst~mes de classification automatique.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Nos travaux ont 6td rdalisds sur un analyseur syntaxique h large couverture syntaxique de l'anglais dcrit et ont portd sur un sous-ensemble prdcis de celui-ci : le niveau le plus haut qui doit ddcider avec quelle(s) r6gle(s)---et, s'il yen a plusieurs, dans quel ordre--lancer l'analyse syntaxique de l'6nonc6 en cours de traitement, selon que cet dnoncd semble comporter des phdnom~nes de coordination structurelle plus ou moins compliquds. Ce problbme de ddcision se traduit naturellement en un probl6me de classification, d'o5 notre utilisation de syst5mes de classification automatique de plusieurs types : r~gles de ddcision, bas6 sur les instances, rdseaux de croyances et rdseaux de neurones. Soulignons que notre analyseur syntaxique possddait ddjh des r~gles heuristiques dddides h ce probl6me de ddcision. Elles avaient 6t6 compos6es par le premier auteur sans avoir recours ?~ aucun mdcanisme automatique. Nous ddsirions maintenant trouver de nouvelles heuristiques qui seraient encore plus performantes que les anciennes et qui pourraient donc les remplacer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expdrimentation en apprentissage d'heuristiques pour l'analyse syntaxique", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "La mdthodologie que nous avons utilis6e est la suivante. Premi6rement, nous avons d6fini les attributs les plus pertinents pour reprdsenter les exemples (6nonc6s). II importait d'identifier des attributs facilement calculables de fa~on automatique et qui permettraient d'obtenir de nouvelles heuristiques intdressantes. Par exemple, la prfisence de conjonctions de coordination et la iongueur de l'dnonc6 sont deux attributs utiles. Deuxi6mement, nous avons soumis les exemples, traduits en termes des attributs sdlectionn6s, aux syst~mes classificateurs afin d'obtenir des rbgles. Nous avons ensuite sdlectionnd les r6gles les plus int6ressantes, c'est-h-dire celles qui dtaient les plus discriminantes tout en demeurant intelligibles dans une perspective linguistique. Troisi6mement, nous avons incorpor6 les r6gles s61ectionn6es h notre analyseur syntaxique en remplacement des anciennes. Finalement, nous avons 6valud la nouvelle version de l'analyseur obtenue grace h ces nouvelles r6gles et effectual une comparaison avec l'ancienne version. Les rdsultats que nous avons obtenus se rdsument ainsi : nous avons trouv~ de nouvelles heuristiques qui sont significativement meilleures que les anciennes et qui, en particulier, poss6dent un taux d'erreur de 35% inf6rieur h celui des anciennes. Qui plus est, ces r6sultats ont 6t6 obtenus sur des 6nonc6s tout h fait inddpendants de ceux utilis6s pour l'entra~nement avec les syst6mes classificateurs. Ces r6sultats d6montrent que des techniques d'apprentissage automatis6 peuvent concourir ~. I'optimisation adaptive de certaines d6cisions importantes en analyse syntaxique.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Expdrimentation en apprentissage d'heuristiques pour l'analyse syntaxique", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work described here was supported by the Natural Sciences and Engineering Research Council of Canada.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Mechanics of Small Engines", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Atkinson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atkinson, H.F. (1990) Mechanics of Small Engines. New York: Gregg Division, McGraw-Hill.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic Grammar Induction and Parsing Free Text: A Transformation-Based Approach", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of the 31st Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "259--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill E. (1993) \"Automatic Grammar Induction and Parsing Free Text: A Transformation-Based Approach\", Proc. of the 31st Annual Meeting of the ACL, pp.259-265.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "FeasPar--A Feature Strncture Parser Learning to Parse Spontaneous Speech", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Buo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buo F.D. (1996) \"FeasPar--A Feature Strncture Parser Learning to Parse Spontaneous Speech\", Ph.D. Thesis, Fakultat ftir Informatik, Univ. Karlsruhe, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Text Processing without a priori Domain Knowledge: Semi-Automatic Linguistic for Incremental Knowledge Acquisition", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Delisle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Delisle S. (1994) \"Text Processing without a priori Domain Knowledge: Semi-Automatic Linguistic for Incremental Knowledge Acquisition\", Ph.D. Thesis, Dept. of Compu- ter Science, Univ. of Ottawa. Published as technical report TR-94-02.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ezawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Norton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. of the 13th blternational Conf. on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ezawa K., Singh M. & Norton S. (1996) \"Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management\", Proc. of the 13th blternational Conf. on Machine Learning, pp.139-147.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Intelligently Helping the Human Planner in Industrial Process Planing", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Famili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "109--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Famili A. & Turney P. (1991) \"Intelligently Helping the Human Planner in Industrial Process Planing\", AI EDAM - AI for Engineering Design Analysis and Manufacturing, 5(2), pp. 109-124.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning Parse and Translation Decisions From Examples With Rich Context", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of ACL-EACL Conf", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "482--489", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hermjakob U. & Mooney R.J. (1997) \"Learning Parse and Translation Decisions From Examples With Rich Context\", Proc. of ACL-EACL Conf., pp.482-489.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Efficient Heuristic Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Huyck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Lytinen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of the llth National Conf on AI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "386--391", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huyck C.R. & Lytinen S.L. (1993) \"Efficient Heuristic Natural Language Parsing\", Proc. of the llth National Conf on AI, pp.386-391.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Applied Multivariate Statistical Analysis", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Wichern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johnson R.A. & Wichern D.W. (1992) Applied Multivariate Statistical Analysis, Prentice Hall.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "MLC++: A machine learning library in C++", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kohavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Manley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Pleger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Tools with AI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--743", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kohavi R., John G., Long R., Manley D. & Pleger K. (1994) \"MLC++: A machine learning library in C++\", Tools with AI, IEEE Computer Society Press, pp.740-743.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Natural Language Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Fong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lee Giles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Symbolic, Connectionnist, attd Statistical Approaches to Learning for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence S., Fong S. & Lee Giles C. (1996) \"Natural Lan- guage Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods\", in S. Wermter, E. Riloff and G. Scheler (eds.), Symbolic, Connectionnist, attd Statistical Approaches to Learning for Natural Language Processing, Lectures Notes in AI, Springer-Verlag, pp.33-47.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Cue Phrase Classification Using Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Litman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Journal of Al Research", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "53--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Litman D. (1996) \"Cue Phrase Classification Using Machine Learning\", Journal of Al Research, 5, pp.53-95.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Statistical Decision-Tree Models for Parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of tire 33rd Annual Meeting of tile ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magerman D. (1995) \"Statistical Decision-Tree Models for Parsing\", Proc. of tire 33rd Annual Meeting of tile ACL, 276-283.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using Decision Trees for Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Lehnert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of IJCAI-95", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1050--1055", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McCarthy J. & Lehnert W.G. (1995) \"Using Decision Trees for Coreference Resolution\", Proc. of IJCAI-95, pp.1050- 1055.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of ACL-EACL Conf", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "214--221", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neumann G. (1997) \"Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation\", Proc. of ACL-EACL Conf, pp.214-221.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "C4.5: Programs for Machble Lealvling", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Quinlan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quinlan J.R. (1993) C4.5: Programs for Machble Lealvling, Morgan Kaufmann.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A Comprehensive Grammar of the English Language", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Greenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Leech", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Svartvik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quirk R., Greenbaum S., Leech G. & Svartvik J. (1985) A Comprehensive Grammar of the English Language, Longman.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Grammar Specialization Through Entropy Thresholds", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Samuelsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the 32nd Annual Meeting of ttre ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuelsson C. (1994) \"Grammar Specialization Through Entropy Thresholds\", Proc. of the 32nd Annual Meeting of ttre ACL, pp.188-195.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The Acquisition and Use of Context-dependent Grammars for English", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Simmons", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computational Linguistics", |
|
"volume": "18", |
|
"issue": "4", |
|
"pages": "392--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simmons F.S. & Yu Y.H. (1992) \"The Acquisition and Use of Context-dependent Grammars for English\", Computational Linguistics, 18(4), pp.392-418.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Learning Text Analysis Rules for Domain-Specific Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Soderland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soderland S.G. (1997) \"Learning Text Analysis Rules for Domain-Specific Natural Language Processing\", Ph.D. Thesis, Dept. of Computer Science, Univ. of Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Theeramunkong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Kawaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Okumura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of the CEGDLE Workshop at ACL-EACL'97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theeramunkong T., Kawaguchi Y. & Okumura (1997) \"Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement\", Proc. of the CEGDLE Workshop at ACL-EACL'97, pp.78-83.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Hypothesis-driven constructive induction in AQ17-HCI: a method and experiments", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wnek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Michalski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Machine Learning", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "139--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wnek J. & Michalski R.S. (1994) \"Hypothesis-driven cons- tructive induction in AQ17-HCI: a method and experi- ments\", Machine Learning, 14(2), pp. 139-168.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Approach", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the 12th National Conf. on AI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "748--753", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zelle J.M. & Mooney R.J. (1994) \"Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Ap- proach\", Proc. of the 12th National Conf. on AI, pp.748- 753.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"text": "summarises the results obtained from different systems in terms of the error rates on the testing set. All systems gave results with an error rate below 20%.", |
|
"content": "<table><tr><td>SYSTEM</td><td>Type of system</td><td>Error rate</td></tr><tr><td>ID3</td><td>decision rules</td><td>16.2%</td></tr><tr><td>C4.5</td><td>decision rules</td><td>18.9%</td></tr><tr><td>'IMAFO'</td><td>decision rules</td><td>16.5%</td></tr><tr><td>oneR</td><td>decision rule (one)</td><td>15.6%</td></tr><tr><td>IB</td><td>instance-based</td><td>10.8%</td></tr><tr><td>aha-ib</td><td>instance-based</td><td>18.9%</td></tr><tr><td>naive-bayes</td><td>belief networks</td><td>16.2%</td></tr><tr><td>perceptron</td><td>neural networks</td><td>13.5%</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"content": "<table><tr><td>.</td><td/><td/></tr><tr><td>-Imp</td><td>NC-Imp</td><td>] Output of</td></tr><tr><td/><td/><td>NC_C-Imp</td></tr><tr><td>C</td><td>C</td><td>C</td></tr><tr><td>NC</td><td>NC</td><td>NC</td></tr><tr><td>NC</td><td>C</td><td>NC</td></tr><tr><td>C</td><td>NC</td><td>NC</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"content": "<table><tr><td>Err-</td><td>Std.</td><td>p-value</td></tr><tr><td>rate</td><td>dev.</td><td/></tr><tr><td>(%)</td><td/><td/></tr><tr><td>25.268</td><td>+3.2</td><td>---</td></tr><tr><td>20.526</td><td>_+2.9</td><td>0.126</td></tr><tr><td>22.105</td><td>_+3.0</td><td>0.229</td></tr><tr><td>16.316</td><td>\u2022 +2.7</td><td>0.009</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |