{
"paper_id": "E99-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:37:44.305073Z"
},
"title": "Japanese Dependency Structure Analysis Based on Maximum Entropy Models",
"authors": [
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a dependency structure analysis of Japanese sentences based on the maximum entropy models. Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus or phrasal units. The dependency accuracy of our system is 87.2% using the Kyoto University corpus. We discuss the contribution of each feature set and the relationship between the number of training data and the accuracy.",
"pdf_parse": {
"paper_id": "E99-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a dependency structure analysis of Japanese sentences based on the maximum entropy models. Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus or phrasal units. The dependency accuracy of our system is 87.2% using the Kyoto University corpus. We discuss the contribution of each feature set and the relationship between the number of training data and the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency structure analysis is one of the basic techniques in Japanese sentence analysis. The Japanese dependency structure is usually represented by the relationship between phrasal units called 'bunsetsu.' The analysis has two conceptual steps. In the first step, a dependency matrix is prepared. Each element of the matrix represents how likely one bunsetsu is to depend on the other. In the second step, an optimal set of dependencies for the entire sentence is found. In this paper, we will mainly discuss the first step, a model for estimating dependency likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
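{
"text": "To make the two-step decomposition concrete, the following sketch (ours, not from the paper; the names and the greedy second step are illustrative only) shows how a likelihood estimator fills the matrix and how a separate search step consumes it:

# Illustrative two-step skeleton (not the paper's code).
# Step 1 fills a matrix of dependency likelihoods; step 2 searches for the
# best overall set of dependencies under that matrix.

def build_dependency_matrix(bunsetsus, likelihood):
    # likelihood(i, j) estimates how likely bunsetsu i depends on j (i < j).
    n = len(bunsetsus)
    matrix = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        for j in range(i + 1, n):
            matrix[i][j] = likelihood(i, j)
    return matrix

def find_best_dependencies(matrix):
    # Greedy stand-in for the second step: each bunsetsu except the
    # rightmost takes its individually most likely head.
    n = len(matrix)
    return [max(range(i + 1, n), key=lambda j: matrix[i][j]) for i in range(n - 1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},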
{
"text": "So far there have been two different approaches to estimating the dependency likelihood, One is the rule-based approach, in which the rules are created by experts and likelihoods are calculated by some means, including semiautomatic corpusbased methods but also by manual assignment of scores for rules. However, hand-crafted rules have the following problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 They have a problem with their coverage. Because there are many features to find correct dependencies, it is difficult to find them manually. \u2022 They also have a problem with their consistency, since many of the features compete with each other and humans cannot create consistent rules or assign consistent scores. \u2022 As syntactic characteristics differ across different domains, the rules have to be changed when the target domain changes. It is costly to create a new hand-made rule for each domain. At/other approach is a fully automatic corpusbased approach. This approach has the potential to overcome the problems of the rule-based approach. It automatically learns the likelihoods of dependencies from a tagged corpus and calculates the best dependencies for an input sentence. We take this approach. This approach is taken by some other systems (Collins, 1996; Fujio and Matsumoto, 1998; Haruno et ah, 1998) . The parser proposed by Ratnaparkhi (Ratnaparkhi, 1997) is considered to be one of the most accurate parsers in English. Its probability estimation is based on the maximum entropy models. We also use the maximum entropy model. This model learns the weights of given features from a training corpus. The weights are calculated based on the frequencies of the features in the training data. The set of features is defined by a human. In our model, we use features of bunsetsu, such as character strings, parts of speech, and inflection types of bunsetsu, as well as information between bunsetsus, such as the existence of punctuation, and the distance between bunsetsus. The probabilities of dependencies are estimated from the model by using those features in input sentences. We assume that the overall dependencies in a whole sentence can be determined as the product of the probabilities of all the dependencies in the sentence. Now, we briefly describe the algorithm of dependency analysis. It is said that Japanese dependencies have the following characteristics.",
"cite_spans": [
{
"start": 854,
"end": 869,
"text": "(Collins, 1996;",
"ref_id": "BIBREF7"
},
{
"start": 870,
"end": 896,
"text": "Fujio and Matsumoto, 1998;",
"ref_id": "BIBREF10"
},
{
"start": 897,
"end": 916,
"text": "Haruno et ah, 1998)",
"ref_id": null
},
{
"start": 954,
"end": 973,
"text": "(Ratnaparkhi, 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Dependencies are directed from left to right",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Dependencies do not cross",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) A bunsetsu, except for the rightmost one, depends on only one bunsetsu (4) In many cases, the left context is not necessary to determine a dependency 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The analysis method proposed in this paper is designed to utilize these features. Based on these properties, we detect the dependencies in a sentence by analyzing it backwards (from right to left). In the past, such a backward algorithm has been used with rule-based parsers (e.g., (Fujita, 1988) ). We applied it to our statistically based approach. Because of the statistical property, we can incorporate a beam search, an effective way of limiting the search space in a backward analysis.",
"cite_spans": [
{
"start": 282,
"end": 296,
"text": "(Fujita, 1988)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
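{
"text": "A minimal sketch of this backward beam search is given below (our illustration; the paper publishes no code, and dep_prob stands in for the maximum entropy estimate of Section 2). Each hypothesis records a head for every bunsetsu to the right of the current position; candidate heads that would create crossing dependencies are excluded by following the head chain from the immediately following bunsetsu, and only the most probable partial analyses are kept at each step:

import math

def analyze(sentence, dep_prob, beam_width=11):
    # Backward (right-to-left) dependency analysis with a beam search.
    # dep_prob(i, j, sentence) is the estimated probability that bunsetsu i
    # depends on bunsetsu j (a stand-in for the M.E. model).
    n = len(sentence)
    beam = [(0.0, {n - 1: None})]          # (log probability, heads so far)
    for i in range(n - 2, -1, -1):
        extended = []
        for logp, heads in beam:
            # Non-crossing candidate heads: i+1, its head, its head's head, ...
            j = i + 1
            while j is not None:
                p = dep_prob(i, j, sentence)
                extended.append((logp + math.log(p), {**heads, i: j}))
                j = heads[j]
        extended.sort(key=lambda h: h[0], reverse=True)
        beam = extended[:beam_width]       # keep the k best partial analyses
    return beam[0][1]                      # heads of the best full analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},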
{
"text": "Given a tokenization of a test corpus, the problem of dependency structure analysis in Japanese can be reduced to the problem of assigning one of two tags to each relationship which consists of two bunsetsus. A relationship could be tagged as \"0\" or \"1\" to indicate whether or not there is a dependency between the bunsetsus, respectively. The two tags form the space of \"futures\" for a maximum entropy formulation of our dependency problem between bunsetsus. A maximum entropy solution to this, or any other similar problem allows the computation of P(f[h) for any f from the space of possible futures, F, for every h from the space of possible histories, H. A \"history\" in maximum entropy is all of the conditioning data which enables you to make a decision among the space of futures. In the dependency problem, we could reformulate this in terms of finding the probability of f associated with the relationship at index t in the test corpus as: P(f]ht) = P(fl Information derivable from the test corpus related to relationship t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "The computation of P(f]h) in M.E. is dependent on a set of '`features\" which, hopefully, are helpful in making a prediction about the future. Like most current M.E. modeling efforts in computational linguistics, we restrict ourselves to features which are binary functions of the history and aAssumption (4) has not been discussed very much, but our investigation with humans showed that it is true in more than 90% of cases. future. For instance, one of our features is g 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "g(h,f) = t 0 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "Here \"has(h,z)\" is a binary function which returns true if the history h has an attribute x. We focus on attributes on a bunsetsu itself and those between bunsetsus. Section 3 will mention these attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
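{
"text": "As a concrete illustration (ours; the attribute string is invented for the example), such a binary feature can be written directly as a function of the history and the future:

def has(history, attribute):
    # True if the given attribute is among the observed attributes of h.
    return attribute in history['attributes']

def g1(history, future):
    # Fires (returns 1) only when the history carries the attribute and
    # the future says the two bunsetsus are in a dependency relation.
    if has(history, 'Anterior-Type(String): wo') and future == 1:
        return 1
    return 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},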
{
"text": "Given a set of features and some training data, the maximum entropy estimation process produces a model in which every feature gi has associated with it a parameter ai. This allows us to compute the conditional probability as follows (Berger et al., 1996) :",
"cite_spans": [
{
"start": 234,
"end": 255,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "P(flh) - YIia[ '(n'l) z~(h) (2) ~,i \u2022 (3) I i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
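{
"text": "Equations (2) and (3) transcribe directly into code (our sketch; the feature functions g_i and weights alpha_i are assumed to be given, and the futures are just 0 and 1):

def conditional_probability(h, f, features, alphas, futures=(0, 1)):
    # P(f|h) as the normalized product of alpha_i ** g_i(h, f).
    def unnormalized(h, f):
        prod = 1.0
        for g, alpha in zip(features, alphas):
            prod *= alpha ** g(h, f)
        return prod
    z = sum(unnormalized(h, f2) for f2 in futures)   # Z(h), equation (3)
    return unnormalized(h, f) / z                    # equation (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},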
{
"text": "The maximum entropy estimation technique guarantees that for every feature gi, the expected value of gi according to the M.E. model will equal the empirical expectation of gi in the training corpus. In other words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "y]~ P(h, f). g,(h, f) h,! = y-~P(h).y~P~(Slh)-g,(h,1). (41 h !",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
{
"text": "Here /3 is an empirical probability and PME is the probability assigned by the M.E. model. We assume that dependencies in a sentence are independent of each other and the overall dependencies in a sentence can be determined based on the product of probability of all dependencies in the sentence. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},
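{
"text": "Under this independence assumption, the probability of a whole analysis is simply the product over its dependencies (a sketch, reusing the dep_prob stand-in from the beam-search sketch in Section 1):

def analysis_probability(sentence, heads, dep_prob):
    # heads[i] is the head of bunsetsu i; the rightmost bunsetsu has none.
    p = 1.0
    for i, j in heads.items():
        if j is not None:
            p *= dep_prob(i, j, sentence)
    return p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability Model",
"sec_num": "2"
},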
{
"text": "In our experiment, we used the Kyoto University text corpus (version 2) (Kurohashi and Nagao, 1997) , a tagged corpus of the Mainichi newspaper. For training we used 7,958 sentences from newspaper articles appearing from January 1st to January 8th, and for testing we used 1,246 sentences from articles appearing on January 9th. The input sentences were morphologically analyzed and their bunsetsus were identified. We assumed that this preprocessing was done correctly before parsing input sentences. If we used automatic morphological analysis and bunsetsu identification, the parsing accuracy would not decrease so much because the rightmost element in a bunsetsu is usually a case marker, a verb ending, or a adjective ending, and each of these is easily recognized. The automatic preprocessing by using public domain tools, for example, can achieve 97% for morphological analysis (Kitauchi et al., 1998) and 99% for bunsetsu identification (Murata et al., 1998) . We employed the Maximum Entropy tool made by Ristad (Ristad, 1998) , which requires one to specify the number of iterations for learning. We set this number to 400 in all our experiments.",
"cite_spans": [
{
"start": 72,
"end": 99,
"text": "(Kurohashi and Nagao, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 885,
"end": 908,
"text": "(Kitauchi et al., 1998)",
"ref_id": "BIBREF13"
},
{
"start": 945,
"end": 966,
"text": "(Murata et al., 1998)",
"ref_id": "BIBREF16"
},
{
"start": 1014,
"end": 1035,
"text": "Ristad (Ristad, 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
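{
"text": "We used the toolkit as is; for readers unfamiliar with this kind of estimation, the sketch below shows the general shape of one family of such iterative procedures (a generic Generalized Iterative Scaling step, our illustration rather than Ristad's actual code, reusing conditional_probability from the sketch in Section 2). Each iteration scales every weight toward matching the empirical feature expectation, and the iteration count plays the role of the parameter we set to 400:

def train_gis(events, features, iterations=400):
    # events: list of observed (history, future) pairs from the corpus.
    # C must bound the number of features active on any single event.
    C = max(sum(g(h, f) for g in features) for h, f in events)
    alphas = [1.0] * len(features)
    empirical = [sum(g(h, f) for h, f in events) for g in features]
    for _ in range(iterations):
        model = [0.0] * len(features)
        for h, _f in events:
            for f in (0, 1):
                p = conditional_probability(h, f, features, alphas)
                for i, g in enumerate(features):
                    model[i] += p * g(h, f)
        for i in range(len(features)):
            if model[i] > 0.0:
                alphas[i] *= (empirical[i] / model[i]) ** (1.0 / C)
    return alphas",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},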
{
"text": "In the following sections, we show the features used in our experiments and the results. Then we describe some interesting statistics that we found in our experiments. Finally, we compare our work with some related systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
{
"text": "The features used in our experiments are listed in Tables 1 and 2. Each row in Table 1 contains a feature type, feature values, and an experimental result that will be explained later. Each feature consists of a type and a value. The features are basically some attributes of a bunsetsu itself or those between bunsetsus. We call them 'basic features.' The list is expanded from tIaruno's list (Haruno et al., 1998) . The features in the list are classified into five categories that are related to the \"Head\" part of the anterior bunsetsu (category \"a\"), the '~rype\" part of the anterior bunsetsu (category \"b\"), the \"Head\" part of the posterior bunsetsu (category \"c\"), the '~l~ype \" part of the posterior bunsetsu (category \"d\"), and the features between bunsetsus (category \"e\") respectively. The term \"Head\" basically means a rightmost content word in a bunsetsu, and the term \"Type\" basically means a function word following a \"Head\" word or an inflection type of a \"Head\" word. The terms are defined in the following paragraph. The features in Table 2 are combinations of basic features ('combined features'). They are represented by the corresponding category name of basic features, and each feature set is represented by the feature numbers of the corresponding basic features. They are classified into nine categories we constructed manually. For example, twin features are combinations of the features related to the categories %\" and \"c.\" Triplet, quadruplet and quintuplet features basically consist of the twin features plus the features of the remainder categories \"a,\" \"d\" and \"e.\" The total number of features is about 600,000. Among them, 40,893 were observed in the training corpus, and we used them in our experiment.",
"cite_spans": [
{
"start": 394,
"end": 415,
"text": "(Haruno et al., 1998)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1051,
"end": 1058,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
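{
"text": "To illustrate how a combined feature behaves (our sketch; the attribute strings are invented examples of category 'b' and 'c' values), it can be generated as the conjunction of its basic features, so that it fires only when all of its parts fire on the same history:

def make_combined_feature(attributes):
    # Returns a binary feature that is the conjunction of basic attributes.
    def g(history, future):
        if all(has(history, a) for a in attributes) and future == 1:
            return 1
        return 0
    return g

# A twin feature over one category 'b' attribute and one category 'c' attribute.
twin = make_combined_feature(['Anterior-Type(String): wo',
                              'Posterior-Head-POS(Major): verb'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},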
{
"text": "The terms used in the table are the following: Anterior: left bunsetsu of the dependency Posterior: right bunsetsu of the dependency Head: the rightmost word in a bunsetsu other than those whose major part-of-speech 2 category is \"~ (special marks),\" \"1~ (postpositional particles),\" or \"~ (suffix)\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
{
"text": "2Part-of-speech categories follow those of JU-MAN (Kurohashi and Nagao, 1998) . Head-Lex: the fundamental form (uninflected form) of the head word. Only words with a frequency of three or more are used. Head-Inf: the inflection type of a head Type: the rightmost word other than those whose major part-of-speech category is \"~ (special marks).\" If the major category of the word is neither \"IIJJ~-~-] (post-positional particles)\" nor \"~[~:~. (suffix),\" and the word is inflectable 3, then the type is represented by the inflection type.",
"cite_spans": [
{
"start": 50,
"end": 77,
"text": "(Kurohashi and Nagao, 1998)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
{
"text": "JOStiIl: the rightmost post-positional particle in the bunsetsu JOSttI2: the second rightmost post-positional particle in the bunsetsu if there are two or more post-positional particles in the bunsetsu TOUTEN, WA: TOUTEN means if a comma (Touten) exists in the bunsetsu. WA means if the word WA (a topic marker) exists in the bunsetsu BW: BW means \"between bunsetsus\" BW-Distance: the distance between the bunsetsus BW-TOUTEN: if TOUTEN exists between bunsetsus BW-IDto-Anterior-Type:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
{
"text": "BW-IDto-Anterior-Type means if there is a bunsetsu whose type is identical to that of the anterior bunsetsu between bunsetsus BW-IDto-Anterior-Type-Head-P OS: the part-of-speech category of the head word of the bunsetsu of \"BW-IDto-Anterior-Type\" BW-IDto-Posterior-Head: if there is between bunsetsus a bunsetsu whose head is identical to that of the posterior bunsetsu BW-IDto-Posterior-Head-Type(String):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
{
"text": "the lexical information of the bunsetsu \"BW-IDto-Posterior-Head\" The results of our experiment are listed in Table 3. The dependency accuracy means the percentage of correct dependencies out of all dependencies. The sentence accuracy means the percentage of sentences in which all dependencies were analyzed correctly. We used input sentences that had already been morphologically analyzed and for which bunsetsus had been identified. The first line in Table 3 (deterministic) shows the accuracy achieved when the test sentences were analyzed deterministically (beam width k = 1). The second line in Table 3 (best beam search) shows the best accuracy among the experiments when changing the beam breadth k from 1 to 20. The best accuracy was achieved when k = 11, although the variation in accuracy was very small. This result supports assumption (4) in Chapter 1 because 3The inflection types follow those of JUMAN. The same values as those of feature number 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 600,
"end": 607,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "3.1"
},
{
"text": "Twin features: related to the \"Type\" part of the anterior bunsetsu and the \"Head\" part of the posterior bunsetsu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "86.75% (-o.39%)",
"sec_num": null
},
{
"text": "Triplet features: basically consist of the twin features plus the features between bunsetsus. Quadruplet features: basically consist of the twin features plus the features related to the \"Head\" part of the anterior bunsetsu, and the \"Type\" part of the posterior bunsetsu. , 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42 it shows that the previous context has almost no effect on the accuracy. The last line in Table 3 represents the accuracy when we assumed that every bunsetsu depended on the next one (baseline). Figure 1 shows the relationship between the sentence length (the number of bunsetsus) and the dependency accuracy. The data for sentences longer than 28 segments are not shown, because there was at most one sentence of each length. Figure 1 shows that the accuracy degradation due to increasing sentence length is not significant. For the entire test corpus the average running time on a SUN Sparc Station 20 was 0.08 seconds per sentence.",
"cite_spans": [
{
"start": 272,
"end": 316,
"text": ", 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42",
"ref_id": null
}
],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 512,
"end": 520,
"text": "Figure 1",
"ref_id": "FIGREF4"
},
{
"start": 744,
"end": 752,
"text": "Figure 1",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "86.75% (-o.39%)",
"sec_num": null
},
{
"text": "This section describes how much each feature set contributes to improve the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "The rightmost column in Tables 1 and 2 shows the performance of the analysis without each feature set. In parenthesis, the percentage of improvement or degradation to the formal experiment is shown. In the experiments, when a basic feature was deleted, the combined features that included the basic feature were also deleted.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 38,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "We also conducted some experiments in which several types of features were deleted together. The results are shown in Table 4 . All of the results in the experiments were carried out deterministically (beam width k = 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "The results shown in Table 1 were very close to our expectation. The most useful features are the type of the anterior bunsetsu and the partof-speech tag of the head word on the posterior bunsetsu. Next important features are the distance between bunsetsus, the existence of punctuation in the bunsetsu, and the existence of brackets. These results indicate preferential rules with respect to the features.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "The accuracy obtained with the lexical features of the head word was better than that without them. In the experiment with the features, we found many idiomatic expressions, for example, \"~,, 15-C (oujile, according to)--b}~b (kimeru, decide)\" and \"~'~\" (katachi_de, in the form of)--~b~ (okonawareru, be held).\" We would expect to collect more of such expressions if we use more training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "The experiments without some combined features are reported in Tables 2 and 4 . As can be seen from the results, the combined features are very useful to improve the accuracy. We used these combined features in addition to the basic features because we thought that the basic features were actually related to each other. Without the combined features, the features are independent of each other in the maximum entropy framework.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 77,
"text": "Tables 2 and 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "We manually selected combined features, which are shown in Table 2 . If we had used all combi- ",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},
{
"text": "68.83% (-18.31%) nations, the number of combined features would have been very large, and the training would not have been completed on the available machine. Furthermore, we found that the accuracy decreased when several new features were added in our preliminary experiments. So, we should not use all combinations of the basic features. We selected the combined features based on our intuition.",
"cite_spans": [
{
"start": 7,
"end": 16,
"text": "(-18.31%)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "81.28% (-5.86%)",
"sec_num": null
},
{
"text": "In our future work, we believe some methods for automatic feature selection should be studied. One of the simplest ways of selecting features is to select features according to their frequencies in the training corpus. But using this method in our current experiments, the accuracy decreased in all of the experiments. Other methods that have been proposed are one based on using the gain (Berger et al., 1996) and an approximate method for selecting informative features (Shirai et al., 1998a) , and several criteria for feature selection were proposed and compared with other criteria (Berger and Printz, 1998) . We would like to try these methods.",
"cite_spans": [
{
"start": 389,
"end": 410,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF6"
},
{
"start": 472,
"end": 494,
"text": "(Shirai et al., 1998a)",
"ref_id": "BIBREF19"
},
{
"start": 587,
"end": 612,
"text": "(Berger and Printz, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "81.28% (-5.86%)",
"sec_num": null
},
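{
"text": "The frequency-based selection we tried can be sketched as follows (our illustration): count how often each feature fires over the training events and keep only those at or above a threshold:

from collections import Counter

def select_by_frequency(events, features, threshold=3):
    # Keep features that fire at least `threshold` times in the corpus.
    counts = Counter()
    for h, f in events:
        for i, g in enumerate(features):
            counts[i] += g(h, f)
    return [g for i, g in enumerate(features) if counts[i] >= threshold]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Accuracy",
"sec_num": "3.2"
},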
{
"text": "Investigating the sentences which could not be analyzed correctly, we found that many of those sentences included coordinate structures. We believe that coordinate structures can be detected to a certain extent by considering new features which take a wide range of information into account. Figure 2 shows the relationship between the number of training data (the number of sentences) and the accuracy. This figure shows dependency accuracies for the training corpus and the test corpus. Accuracy of 81.84% was achieved even with a very small training set (250 sentences). We believe that this is due to the strong characteristic of the maximum entropy framework to the data sparseness problem. From the learning curve, we can expect a certain amount of improvement if we have more training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 2",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "81.28% (-5.86%)",
"sec_num": null
},
{
"text": "This section compares our work with related statistical dependency structure analyses in Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Works",
"sec_num": "3.4"
},
{
"text": "Comparison with Shirai's work (Shirai et al., 1998b) Shirai proposed a framework of statistical language modeling using several corpora: the EDR corpus, RWC corpus, and Kyoto University corpus. He combines a parser based on a hand-made CFG and a probabilistic dependency model. He also used the maximum entropy model to estimate the dependency probabilities between two or three post-positional particles and a verb. Accuracy of 84.34% was achieved using 500 test sentences of length 7 to 9 bunsetsus. In both his and our experiments, the input sentences were morphologically analyzed and their bunsetsus were identified. The comparison of the results cannot strictly be done because the conditions were different. However, it should be noted that the accuracy achieved by our model using sentences of the same length was about 3% higher than that of Shirai's model, although we used a much smaller set of training data. We believe that it is because his approach is based on a hand-made CFG.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Shirai et al., 1998b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Works",
"sec_num": "3.4"
},
{
"text": "Comparison with Ehara's work (Ehara, 1998) Ehara also used the Maximum Entropy model, and a set of similar kinds of features to ours. However, there is a big difference in the number of features between Ehara's model and ours. Besides the difference in the number of basic features, Ehara uses only the combination of two features, but we also use triplet, quadruplet, and quintuplet features. As shown in Section 3.2, the accuracy increased more than 5% using triplet or larger combinations. We believe that the difference in the combination features between Ehara's model and ours may have led to the difference in the accuracy. The accuracy of his system was about 10% lower than ours. Note that Ehara used TV news articles for training and testing, which are different from our corpus. The average sentence length in those articles was 17.8, much longer than that (average: 10.0) in the Kyoto University text corpus.",
"cite_spans": [
{
"start": 29,
"end": 42,
"text": "(Ehara, 1998)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Related Works",
"sec_num": "3.4"
},
{
"text": "Fujio's work (Fujio and Matsumoto, 1998) and Haruno's work (Haruno et al., 1998) Fujio used the Maximum Likelihood model with similar features to our model in his parser. Haruno proposed a parser that uses decision tree models and a boosting method. It is difficult to directly compare these models with ours because they use a different corpus, the EDR corpus which is ten times as large as our corpus, for training and testing, and the way of collecting test data is also different. But they reported an accuracy of around 85%, which is slightly worse than our model. We carried out two experiments using almost the same attributes as those used in their experiments. The results are shown in Table 5 , where the lines \"Feature set(l)\" and \"Feature set(2)\" show the accuracies achieved by using Fujio's attributes and Haruno's attributes respectively. Considering that both results are around 85% to 86%, which is about the same as ours. From these experiments, we believe that the important factor in the statistical approaches is not the model, i.e. Maximum Entropy, Maximum Likelihood, or Decision Tree, but the feature selection. However, it may be interesting to compare these models in terms of the number of training data, as we can imagine that some models are better at coping with the data sparseness problem than others. This is our future work.",
"cite_spans": [
{
"start": 13,
"end": 40,
"text": "(Fujio and Matsumoto, 1998)",
"ref_id": "BIBREF10"
},
{
"start": 45,
"end": 80,
"text": "Haruno's work (Haruno et al., 1998)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 695,
"end": 702,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Comparison with",
"sec_num": null
},
{
"text": "This paper described a Japanese dependency structure analysis based on the maximum entropy model. Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus or phrasal units. The probabilities of dependencies between bunsetsus are estimated by this model. The dependency accuracy of our system was 87.2% using the Kyoto University corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "In our experiments without the feature sets shown in Tables 1 and 2, we found that some basic and combined features strongly contribute to improve the accuracy. Investigating the relationship between the number of training data and the accuracy, we found that good accuracy can be achieved even with a very small set of training data. We believe that the maximum entropy framework has suitable characteristics for overcoming the data sparseness problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "There are several future directions. In particular, we are interested in how to deal with coordinate structures, since that seems to be the largest problem at the moment. 5, 9--12, 14, 15, 19, 20, 24--27, 29, 30, 34--43 .) Feature set (2) (Without features 4, 5, (9) (10) (11) (12) 19, 20, (24) (25) (26) (27) (34) (35) (36) (37) (38) (39) (40) (41) (42) (43) ",
"cite_spans": [
{
"start": 171,
"end": 219,
"text": "5, 9--12, 14, 15, 19, 20, 24--27, 29, 30, 34--43",
"ref_id": null
},
{
"start": 239,
"end": 259,
"text": "(Without features 4,",
"ref_id": null
},
{
"start": 260,
"end": 262,
"text": "5,",
"ref_id": null
},
{
"start": 263,
"end": 266,
"text": "(9)",
"ref_id": null
},
{
"start": 267,
"end": 271,
"text": "(10)",
"ref_id": null
},
{
"start": 272,
"end": 276,
"text": "(11)",
"ref_id": null
},
{
"start": 277,
"end": 281,
"text": "(12)",
"ref_id": null
},
{
"start": 282,
"end": 285,
"text": "19,",
"ref_id": null
},
{
"start": 286,
"end": 289,
"text": "20,",
"ref_id": null
},
{
"start": 290,
"end": 294,
"text": "(24)",
"ref_id": null
},
{
"start": 295,
"end": 299,
"text": "(25)",
"ref_id": null
},
{
"start": 300,
"end": 304,
"text": "(26)",
"ref_id": null
},
{
"start": 305,
"end": 309,
"text": "(27)",
"ref_id": null
},
{
"start": 310,
"end": 314,
"text": "(34)",
"ref_id": null
},
{
"start": 315,
"end": 319,
"text": "(35)",
"ref_id": null
},
{
"start": 320,
"end": 324,
"text": "(36)",
"ref_id": null
},
{
"start": 325,
"end": 329,
"text": "(37)",
"ref_id": null
},
{
"start": 330,
"end": 334,
"text": "(38)",
"ref_id": null
},
{
"start": 335,
"end": 339,
"text": "(39)",
"ref_id": null
},
{
"start": 340,
"end": 344,
"text": "(40)",
"ref_id": null
},
{
"start": 345,
"end": 349,
"text": "(41)",
"ref_id": null
},
{
"start": 350,
"end": 354,
"text": "(42)",
"ref_id": null
},
{
"start": 355,
"end": 359,
"text": "(43)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Maj or) Posterior-Head-POS (Minor) Posterior-Head-Inf(Maj or 7 Post erior-Head-Inf(Minor) Posterior-Type(String) Posterior-Type",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Posterior-Head-Lex Post erior-Head-P OS (Maj or) Posterior-Head-POS (Minor) Posterior-Head-Inf(Maj or 7 Post erior-Head-Inf(Minor) Posterior-Type(String) Posterior-Type(Major)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Minor~ Posterior-JOSHll(Strmg) Posterior-JOSHIl(Minor) Posterior-J OS HI2( St ring) Posterior-JOSHI 2(Minor)",
"authors": [
{
"first": "",
"middle": [],
"last": "Posterior-Type",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Posterior-Type(Minor~ Posterior-JOSHll(Strmg) Posterior-JOSHIl(Minor) Posterior-J OS HI2( St ring) Posterior-JOSHI 2(Minor)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Posterior-punct Uatlon Post erior-bracket-open Posterior-bracket-close BW-Dist ance BW-TOU'I'EIN BW-WA BW-brackets BW-IDt o-Ant erior-Type BW-IDto-Anterior-Type-Head-POS",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Posterior-punct Uatlon Post erior-bracket-open Posterior-bracket-close BW-Dist ance BW-TOU'I'EIN BW-WA BW-brackets BW-IDt o-Ant erior-Type BW-IDto-Anterior-Type- Head-POS(Major)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Posterior-Head BW-IDto-Posterior-Head-Type(String)",
"authors": [
{
"first": "",
"middle": [],
"last": "Bw-Idto",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BW-IDto-Posterior-Head BW-IDto-Posterior-Head- Type(String)",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A comparison of criteria for maximum entropy / minimum divergence feature selection",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Berger",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Printz",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of Third Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Berger and Harry Printz. 1998. A com- parison of criteria for maximum entropy / min- imum divergence feature selection. Proceedings of Third Conference on Empirical Methods in Natural Language Processing, pages 97-106.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum en- tropy approach to natural language processing. Computational Linguistics, 22(1):39-71.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "184--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. Proceed- ings of the 34th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 184-191.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Japanese bunsetsu dependency estimation using maximum entropy method",
"authors": [
{
"first": "Terumasa",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of The Fourth Annual",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terumasa Ehara. 1998. Japanese bunsetsu de- pendency estimation using maximum entropy method. Proceedings of The Fourth Annual",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Meeting of The Association for Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "382--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meeting of The Association for Natural Lan- guage Processing, pages 382-385. (in Japanese).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Japanese dependency structure analysis based on lexicalized statistics",
"authors": [
{
"first": "Masakazu",
"middle": [],
"last": "Fujio",
"suffix": ""
},
{
"first": "Yuuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of Third Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "87--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masakazu Fujio and Yuuji Matsumoto. 1998. Japanese dependency structure analysis based on lexicalized statistics. Proceedings of Third Conference on Empirical Methods in Natural Language Processing, pages 87-96.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A deterministic parser based on karari-uke grammar",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Fujita",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "399--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhiko Fujita. 1988. A deterministic parser based on karari-uke grammar, pages 399-402.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using decision trees to construct a practical parser",
"authors": [
{
"first": "Masahiko",
"middle": [],
"last": "Haruno",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shiral",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Ooyama",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the COLING-ACL '98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiko Haruno, Satoshi Shiral, and Yoshifumi Ooyama. 1998. Using decision trees to con- struct a practical parser. Proceedings of the COLING-ACL '98.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Error-driven model learning of Japanese morphological analysis",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Kitauchi",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1998,
"venue": "IPSJ-WGNL",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akira Kitauchi, Takehito Utsuro, and Yuji Mat- sumoto. 1998. Error-driven model learning of Japanese morphological analysis. IPSJ- WGNL, NL124-6:41--48. (in Japanese).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Kyoto university text corpus project",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "115--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1997. Ky- oto university text corpus project, pages 115- 118. (in Japanese).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Japanese Morphological Analysis System JU-MAN version 3.5",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao, 1998. Japanese Morphological Analysis System JU- MAN version 3.5. Department of Informatics, Kyoto University.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Machine learning approach to bunsetsu identification --comparison of decision tree, maximum entropy model, example-based approach, and a new method using category-exclusive rules",
"authors": [
{
"first": "Masaki",
"middle": [],
"last": "Murata",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 1998,
"venue": "IPSJ-WGNL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaki Murata, Kiyotaka Uchimoto, Qing Ma, and Hitoshi Isahara. 1998. Machine learning approach to bunsetsu identification --compar- ison of decision tree, maximum entropy model, example-based approach, and a new method us- ing category-exclusive rules --. IPSJ-WGNL, NL128-4:23-30. (in Japanese).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A linear observed time statistical parser based on maximum entropy models",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1997,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum en- tropy models. Conference on Empirical Meth- ods in Natural Language Processing.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maximum entropy modeling toolkit, release 1.6 beta",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Sven Ristad",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Sven Ristad. 1998. Maximum en- tropy modeling toolkit, release 1.6 beta. http ://www.mnemonic.com/software/memt.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning dependencies between case frames using maximum entropy method",
"authors": [
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
},
{
"first": "I-Iozumi",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "356--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoaki Shirai, Kentaro Inui, Takenobu Toku- naga, and I-Iozumi Tanaka. 1998a. Learning dependencies between case frames using max- imum entropy method, pages 356-359. (in Japanese).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A framework of integrating syntactic and lexical statistics in statistical parsing",
"authors": [
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
},
{
"first": "Hozumi",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 1998,
"venue": "Journal of Natural Language Processing",
"volume": "5",
"issue": "3",
"pages": "85--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoaki Shirai, Kentaro Inui, Takenobu Toku- naga, and Hozumi Tanaka. 1998b. A frame- work of integrating syntactic and lexical statis- tics in statistical parsing. Journal of Nat- ural Language Processing, 5(3):85-106. Japanese).",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(common noun), ~ (quantifier) .... (24) ~j[t]~ (vowel verb) .... (307 ~(stem), ~r~ (fundamental form) .... (6O) ~, ~ a, ~c L-C, ~, &, tO, t ...."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "b2) = {(9, 11), (I0, 12)}, d = {21,22,23} quadruplet features plus the (a, b, c, d, e) (a, c) = {(1, 16), (2, 17), (3, 18)}, features between bunsetsus. (b, d) = {(6, 21), (7, 22), (8, 23}, e = 31"
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Relationship between the number of bunsetsus in a sentence and dependency accuracy."
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Relationship between the number of training data and the parsing accuracy. (beam breadth k=l)"
},
"TABREF1": {
"type_str": "table",
"text": "Features (basic features) Basic features (5 categories, 43 types) [ \u2022 Feature values ... (Number of values)",
"num": null,
"html": null,
"content": "<table><tr><td>Accuracy without I each feature</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "Features (combined features)",
"num": null,
"html": null,
"content": "<table><tr><td>Combined features (9 categories, 134 types)</td></tr><tr><td>Combinations</td></tr><tr><td>Category</td></tr><tr><td>(b, c)</td></tr><tr><td>(bx, b2, c)</td></tr><tr><td>(b, c, e)</td></tr><tr><td>(dl, d2, e)</td></tr><tr><td>(bl, b2, c, d)</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"text": "Results of dependency analysis",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"3\">Dependency accuracy</td><td colspan=\"2\">Sentence accuracy</td></tr><tr><td colspan=\"2\">Deterministic (k = 1)</td><td colspan=\"3\">87.14%(9814/11263)</td><td colspan=\"2\">40.60% (503/1239)</td></tr><tr><td colspan=\"2\">Best beam search(k = 11)</td><td colspan=\"3\">87.21%(9822/11263)</td><td colspan=\"2\">40.60% (503/1239)</td></tr><tr><td>Baseline</td><td/><td colspan=\"3\">64.09%(7219/11263)</td><td colspan=\"2\">6.38% (79/1239)</td></tr><tr><td>1.0</td><td/><td/><td/><td/><td/></tr><tr><td>0.8714 .......</td><td colspan=\"4\">--~--</td><td/></tr><tr><td>0.8</td><td/><td/><td/><td/><td/></tr><tr><td>0.6</td><td/><td/><td/><td/><td/></tr><tr><td>Dependency accuracy</td><td/><td/><td/><td/><td/></tr><tr><td>0.4</td><td/><td/><td/><td/><td/></tr><tr><td>0.2</td><td/><td/><td/><td/><td/></tr><tr><td/><td>,</td><td>,</td><td>i</td><td>i</td><td>I</td><td>i</td></tr><tr><td/><td/><td>10</td><td/><td>20</td><td/><td>30</td></tr><tr><td/><td colspan=\"5\">Number of bunsetsus in a sentence</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"text": "Accuracy without several types of features Features Without features 1 and 16 (lexical information about the head word)",
"num": null,
"html": null,
"content": "<table><tr><td/><td>Accuracy</td></tr><tr><td/><td>86.30% (-0.84%)</td></tr><tr><td>Without features 35 to 43</td><td>86.83% (-0.31%)</td></tr><tr><td>Without quadruplet and quintuplet features</td><td>84.27% (-2.87%)</td></tr><tr><td>Without triplet, quadruplet, and quintuplet features</td><td/></tr><tr><td>Without all combinations</td><td/></tr></table>"
},
"TABREF7": {
"type_str": "table",
"text": "Simulation of Fujio's and Haruno's experiments",
"num": null,
"html": null,
"content": "<table><tr><td>Feature set</td></tr><tr><td>Feature set (1)</td></tr><tr><td>(Without features 4,</td></tr></table>"
}
}
}
}