{
"paper_id": "C00-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:31:32.019844Z"
},
"title": "Reusing an ontology to generate numeral classifiers",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kyonghee",
"middle": [],
"last": "Paik",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a solution to the problem of generating Japanese numeral classifiers using semantic classes from an ontology. Most nouns must take a numeral classifier when they are quantified in languages such as Chinese, Japanese, Korean, Malay and Thai. In order to select an appropriate classifier, we propose an algorithm which associates classifiers with semantic classes and uses inheritance to list only those classifiers which have to be listed. It generates sortal classifiers with an accuracy of 81%. We reuse the ontology provided by Goi-Taikei, a Japanese lexicon, and show that it is a reasonable choice for this task, requiring information to be entered for less than 6% of individual nouns.",
"pdf_parse": {
"paper_id": "C00-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a solution to the problem of generating Japanese numeral classifiers using semantic classes from an ontology. Most nouns must take a numeral classifier when they are quantified in languages such as Chinese, Japanese, Korean, Malay and Thai. In order to select an appropriate classifier, we propose an algorithm which associates classifiers with semantic classes and uses inheritance to list only those classifiers which have to be listed. It generates sortal classifiers with an accuracy of 81%. We reuse the ontology provided by Goi-Taikei, a Japanese lexicon, and show that it is a reasonable choice for this task, requiring information to be entered for less than 6% of individual nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper we consider two questions. The first is: how do we generate numeral classifiers such as piece in 2 pieces of paper? To do this we use a semantic hierarchy originally developed for a different task. The second is: how far can such a hierarchy be reused?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In English, uncountable nouns cannot be directly modified by numerals, instead the noun must be embedded in a noun phrase headed by a classifier. Knowing when to do this is a language specific property. For example, French deux renseignement must be translated as two pieces of information in English. 1 In many languages, including most South-East Asian languages, Chinese, Japanese and Korean, the majority of nouns are uncountable and must be quantified by numeral classifier combinations. These languages typically have many different classifiers. There has been some work on the analysis of numeral classifiers in natural language processing, particularly for Japanese (Asahioka et al., 1990; Kamei and Muraki, 1995; Bond et al., 1996; Bond et al., 1998; Yokoyama and Ochiai, 1999) , but very little on their generation. We could only find one paper on generating classifiers in Thai (Sornlertlamvanich et al., 1994) . One immediate application for the generation of classifiers is machine translation, and we shall take examples from there, but it is in fact needed for the generation of any quantified noun phrase with an uncountable head noun.",
"cite_spans": [
{
"start": 302,
"end": 303,
"text": "1",
"ref_id": null
},
{
"start": 674,
"end": 697,
"text": "(Asahioka et al., 1990;",
"ref_id": "BIBREF0"
},
{
"start": 698,
"end": 721,
"text": "Kamei and Muraki, 1995;",
"ref_id": "BIBREF8"
},
{
"start": 722,
"end": 740,
"text": "Bond et al., 1996;",
"ref_id": "BIBREF3"
},
{
"start": 741,
"end": 759,
"text": "Bond et al., 1998;",
"ref_id": "BIBREF4"
},
{
"start": 760,
"end": 786,
"text": "Yokoyama and Ochiai, 1999)",
"ref_id": "BIBREF13"
},
{
"start": 889,
"end": 921,
"text": "(Sornlertlamvanich et al., 1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second question we address is: how far can an ontology be reused for a different task to the one it was originally designed for? There are several large ontologies now in use (WordNet (Fellbaum, 1998) ; Goi-Taikei (Ikehara et al., 1997) ; Mikrokosmos (Nirenburg, 1989) ) and it is impractical to rebuild one for every application. However, there is no guarantee that an ontology built for one task will be useful for another.",
"cite_spans": [
{
"start": 179,
"end": 204,
"text": "(WordNet (Fellbaum, 1998)",
"ref_id": null
},
{
"start": 218,
"end": 240,
"text": "(Ikehara et al., 1997)",
"ref_id": "BIBREF7"
},
{
"start": 255,
"end": 272,
"text": "(Nirenburg, 1989)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. In Section 2, we discuss the properties of numeral classifiers in more detail and suggest an improved algorithm for generating them. Section 3 introduces the ontology we have chosen, the Goi-Taikei ontology (Ikehara et al., 1997 ). Then we show how to use the ontology to generate classifiers in Section 4. Finally, we discuss how well it performs in Section 5.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Ikehara et al., 1997",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we introduce the properties of numeral classifiers, focusing on Japanese, then give an algorithm to generate classifiers. Japanese was chosen because of the wealth of published data on Japanese classifiers and the availability of a large lexicon with semantic classes marked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Numeral Classifiers",
"sec_num": "2"
},
{
"text": "Japanese is a language where most nouns can not be directly modified by numerals. Instead, nouns are modified by a numeral-classifier combination as shown in (1). 2 (1) 2-ts\u016b-no 2-CL-ADN denshim\u0113ru email 2 pieces of email",
"cite_spans": [
{
"start": 163,
"end": 164,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "What are Numeral Classifiers",
"sec_num": "2.1"
},
{
"text": "In Japanese, numeral classifiers are a subclass of nouns. The main property distinguishing them from prototypical nouns is that they cannot stand alone. Typically they postfix to numerals, forming a quantifier phrase. Japanese also allows them to combine with the quantifier s\u016b \"some\" or the interrogative nani \"what\" (2). We will call all such combinations of a numeral/quantifier/interrogative with a numeral classifier a numeral-classifier combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "(2) a. 2-hiki \"2 animals\" (Numeral) b. s\u016b-hiki \"some animals\" (Quantifier)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "c. nan-biki \"how many animals\" (Interrogative)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "Classifiers have different properties depending on their use. There are five major types: sortal, which classify the kind of the noun phrase they quantify (such as -tsu \"piece\"); event, which are used to quantify events (such as -kai \"time\"); mensural, which are used to measure the amount of some property (such as senchi \"-cm\"); group, which refer to a collection of members (such as -mure \"group\"); and taxonomic, which force the noun phrase to be interpreted as a generic kind (such as -shu \"kind\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "We propose the following basic structure for sortal classifiers (3). The lexical structure we adopt is an extension of Pustejovsky's (1995) generative lexicon, with the addition of an explicit quantification relationship (Bond and Paik, 1997) .",
"cite_spans": [
{
"start": 119,
"end": 139,
"text": "Pustejovsky's (1995)",
"ref_id": "BIBREF10"
},
{
"start": 221,
"end": 242,
"text": "(Bond and Paik, 1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "(3) classifier [ ARGSTR [ ARG1 x:numeral+, D-ARG1 y ], QUANT quantifies(x,y) ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "There are two variables in the argument structure: the numeral, quantifier or interrogative (represented by numeral+), and the noun phrase being classified. Because the noun phrase being classified can be omitted in context, it is a default argument, one which participates in the logical expressions in the qualia, but is not necessarily expressed syntactically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "Sortal classifiers differ from each other in the restrictions they place on the quantified variable y. For example the classifier -nin adds the restriction y:human. That is, it can only be used to classify human referents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "Japanese has two number systems: a Sino-Japanese one based on Chinese for example, ichi \"one\",ni \"two\",san \"three\", etc., and an alternative native-Japanese system, for example, hitotsu \"one\" futatsu \"two\",mitsu \"three\", etc. In Japanese the native system only exists for the numbers from one to ten. Most classifiers combine with the Chinese forms; however, different classifiers select Sino-Japanese for some numerals, for example, ni-hiki \"two-cl\", and most classifiers undergo some form of sound change (such as -hiki to -biki in (2)). We will not be concerned with these morphological changes; we refer interested readers to Backhouse (1993, 118-122) for more discussion.",
"cite_spans": [
{
"start": 630,
"end": 655,
"text": "Backhouse (1993, 118-122)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "Numeral classifiers characteristically premodify the noun phrases they quantify, linked by an adnominal case marker, as in (4); or appear 'floating' as adverbial phrases, typically to before the verb: (5). The choice between pre-nominal and floating quantifiers is largely driven by discourse related considerations (Downing, 1996) . In this paper we concentrate on the semantic contribution of the quantifiers, and ignore the discourse effects. In the pre-nominal construction the relation between the target noun phrase and quantifier is explicit. For numeral-classifier combinations the quantification can be of the object denoted by the noun phrase itself as in (8); or of a sub-part of it as in (9) (see Bond and Paik (1997) Sornlertlamvanich et al. (1994) . They propose to generate classifiers in Thai as follows: First create a lexicon with default classifiers listed for as many nouns as possible. This was done by automatically extracting noun classifier pairs from a sense-tagged corpus, and taking the classifier that appeared most often with each sense of a noun. 3 Then, the most frequent classifier is listed for each semantic class. Generation is then simple: if a noun has a default classifier in the lexicon, then use it, otherwise use the default classifier associated with its semantic class.",
"cite_spans": [
{
"start": 316,
"end": 331,
"text": "(Downing, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 709,
"end": 729,
"text": "Bond and Paik (1997)",
"ref_id": "BIBREF2"
},
{
"start": 730,
"end": 761,
"text": "Sornlertlamvanich et al. (1994)",
"ref_id": "BIBREF12"
},
{
"start": 1077,
"end": 1078,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
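The two-level default scheme attributed to Sornlertlamvanich et al. above can be sketched as follows. This is a minimal illustration, not their implementation; the lexicon entries and class defaults are invented for the example (their system was for Thai, and the residual fallback shown here follows the Japanese convention used later in this paper):

```python
# Sketch of the two-level default lookup described above: prefer a
# classifier listed for the noun itself, otherwise fall back to the
# default for the noun's semantic class, otherwise a residual classifier.
# All table contents are hypothetical examples, not the paper's data.

NOUN_DEFAULTS = {"usagi": "-wa"}                      # noun-level defaults
CLASS_DEFAULTS = {"animal": "-hiki", "human": "-nin"} # per-class defaults

def choose_classifier(noun: str, sem_class: str) -> str:
    if noun in NOUN_DEFAULTS:          # a listing in the lexicon wins
        return NOUN_DEFAULTS[noun]
    # otherwise the semantic class default, or the residual classifier
    return CLASS_DEFAULTS.get(sem_class, "-tsu")

print(choose_classifier("usagi", "animal"))  # -wa  (noun-level default)
print(choose_classifier("inu", "animal"))    # -hiki (class default)
```

The point of the two levels is that the noun-level table only needs entries whose classifier differs from the class default.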
{
"text": "Unfortunately, no detailed results were given as to the size of the concept hierarchy, the number of nodes in it or the number of nouns for which classifiers were found. As the generation procedure was not implemented, there was no overall accuracy given for the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "As a default, Sornlertlamvanich et al.'s algorithm is useful. However, it does not cover several exceptional cases, so we have refined it further. The extended algorithm is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 190,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
{
"text": "Firstly, we have made explicit what to do when a noun is a member of more than one semantic class or of no semantic class. In the lexicon we used, nouns are, on average, members of 2 semantic classes. However, the semantic classes are ordered so that the most typical use comes first. For example, usagi \"rabbit\" is marked as both animal and meat, with animal coming first ( Figure 3 ). In this case, we would take the classifier associated with the first semantic class. However, in the case of usagi it is not counted with the default classifier for animals -hiki, but with that for birds -wa; this must be listed as an exception.",
"cite_spans": [],
"ref_spans": [
{
"start": 375,
"end": 383,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
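The refinement just described can be sketched as follows. The class IDs and defaults are illustrative stand-ins, not the actual Goi-Taikei assignments:

```python
# Sketch of the refinement described above: a noun carries an ordered
# list of semantic classes (most typical use first); we take the default
# classifier of the first class that has one, unless the noun itself is
# listed as an exception. Tables are hypothetical examples.

CLASS_DEFAULTS = {537: "-hiki",   # e.g. a beast-like class
                  843: "-tsu"}    # e.g. a meat/egg-like class
EXCEPTIONS = {"usagi": "-wa"}     # rabbits are counted like birds

def choose_classifier(noun, sem_classes):
    """sem_classes: semantic class IDs, most typical first."""
    if noun in EXCEPTIONS:                 # listed exception wins
        return EXCEPTIONS[noun]
    for c in sem_classes:                  # first class with a default
        if c in CLASS_DEFAULTS:
            return CLASS_DEFAULTS[c]
    return "-tsu"                          # residual classifier

print(choose_classifier("usagi", [537, 843]))  # -wa
print(choose_classifier("inu", [537]))         # -hiki
```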
{
"text": "Secondly, we have added a method for generating classifiers that quantify coordinate noun phrases. These commonly appear in appositive noun phrases such as ABC-to XYZ-no 2-sha \"the two companies, ABC and XYZ\". In addition, we investigate to what degree we could use inheritance to remove redundancy from the lexicon. If a noun's default classifier is the same as the default classifier for its semantic class, then there is no need to list it in the lexicon. This makes the lexicon smaller and it is easier to add new entries. Any display of the lexical item (such as for maintenance or if the lexicon is used as a human aid) should automatically generate the classifier from the semantic class. Alternatively (and equivalently), in a lexicon with multiple inheritance and defaults, the class's default classifier can be added as a defeasible constraint on all members of the semantic class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "emails",
"sec_num": "2"
},
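The redundancy-removal idea above amounts to a simple check at lexicon-build time. A minimal sketch, with invented data:

```python
# Sketch of the inheritance idea described above: a classifier needs to
# be stored on a noun entry only when it differs from the default of the
# noun's semantic class; otherwise it can be inherited from the class
# and omitted from the lexicon. Data is illustrative only.

CLASS_DEFAULT = {"beast": "-hiki", "bird": "-wa"}

def needs_listing(noun_classifier: str, sem_class: str) -> bool:
    """True if the noun must carry its own default-classifier field."""
    return noun_classifier != CLASS_DEFAULT.get(sem_class)

# usagi "rabbit" is a beast but counted with -wa, so it must be listed:
print(needs_listing("-wa", "beast"))    # True
# a noun taking the beast default -hiki can simply inherit it:
print(needs_listing("-hiki", "beast"))  # False
```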
{
"text": "We used the ontology provided by Goi-Taikei -A Japanese Lexicon (Ikehara et al., 1997) . We chose it because of its rich ontology, its extensive use in many other NLP applications, its wide coverage of Japanese, and the fact that it is being extended to other numeral classifier languages, such as Malay.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Ikehara et al., 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Goi-Taikei Ontology",
"sec_num": "3"
},
{
"text": "The ontology has several hierarchies of concepts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Goi-Taikei Ontology",
"sec_num": "3"
},
{
"text": "with both is-a and has-a relationships: 2,710 semantic classes (12-level tree structure) for common nouns, 200 classes (9-level tree structure) for proper nouns and 108 classes for predicates. We show the top three levels of the common noun ontology in Figure 2 . Words can be assigned to semantic classes anywhere in the hierarchy. Not all semantic classes have words assigned to them. The semantic classes are used in the Japanese word semantic dictionary to classify nouns, verbs and adjectives. The dictionary includes 100,000 common nouns, 70,000 technical terms, 200,000 proper nouns and 30,000 other words: 400,000 words in all. The semantic classes are also used as selectional restrictions on the arguments of predicates in a separate predicate dictionary, with around 17,000 entries. Figure 3 shows an example of one record of the Japanese semantic word dictionary, with the addition of the new DEFAULT CLASSIFIER field (underlined for emphasis).",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 794,
"end": 802,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Goi-Taikei Ontology",
"sec_num": "3"
},
{
"text": "Each record has an index form, pronunciation, a canonical form, part-of-speech and semantic classes. Each word can have up to five common noun classes and ten proper noun classes. In the case of usagi \"rabbit\", there are two common noun classes and no proper noun classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Goi-Taikei Ontology",
"sec_num": "3"
},
{
"text": "In this section we investigate how far the semantic classes can be used to predict default classifiers for nouns. Because most sortal classifiers select for some kind of semantic class, we thought that nouns grouped together under the same semantic class should share the same classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "We associated classifiers with semantic classes by hand. This took around two weeks. We found that, while some classes were covered by a single classifier, around 20% required more than one. For example, 1056:song is counted only by -kyoku \"tune\", and 989:water vehicle only by -seki \"ship\", but the class 961:weapon had members counted by -hon \"long thin\", -ch\u014d \"knife\", -furi \"swords\", -ki \"machines\" and more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "We show the most frequent numeral classifiers in Table 1 . We ended up with 47 classifiers used as semantic classes' default classifiers. This is in line with the fact that most speakers of Japanese know and use between 30 and 80 sortal classifiers (Downing, 1996) . Of course, we expect to add more classifiers at the noun level. 801 semantic classes turned out not to have classifiers. This included classes with no words associated with them, and those that only contained nouns with referents so abstract we considered them to be uncountable, such as greed, lethargy, etc.",
"cite_spans": [
{
"start": 249,
"end": 264,
"text": "(Downing, 1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "We used the default classifiers assigned to the semantic classes to generate defeasible defaults for the noun entries in the common and technical term dictionaries (172,506 words in all). We did this in order to look at the distribution of classifiers over words in the lexicon. In the actual generation this would be done dynamically, after the semantic classes have been disambiguated. The distributions of classifiers were similar to those of the semantic classes, although there was a higher proportion counted with the residual classifier -tsu, and the classifier for machines -dai. This may be an artifact of the 70,000 word technical term dictionary. As further research, we would like to calculate the distribution of classifiers in some text, although we expect it to depend greatly on the genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "The mapping we created is not complete because some of the semantic classes have nouns which do not share the same classifiers. We have to add more specific defaults at the noun level. As well as more specific sortal classifiers, there are cases where a group classifier may be more appropriate. For example, among the nouns counted with -nin there are entries such as couple, twins and so on which are often counted with -kumi \"pair\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "In addition, the choice of classifier can depend on factors other than just semantic class, for example, hito \"people\" can be counted by either -nin or -mei, the only difference being that -mei is more polite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "It was difficult to assign default classifiers to the semantic classes that referred to events. These classes mainly include deverbal nouns (e.g. konomi \"liking\") and nominal verbs (e.g., benky\u014d \"study\"). These can stand for either the action or the result of the action: e.g. kenky\u016b \"a study/research\". In these cases, every application we considered would distinguish between event and sortal classification in the input, so it was only necessary to choose a classifier for the result of the action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Classifiers to the Ontology",
"sec_num": "4"
},
{
"text": "The algorithm was tested on a 3,700-sentence machine translation test set of Japanese with English translations, although we used only the Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "INDEX FORM (usagi), PRONUNCIATION /usagi/, CANONICAL FORM (usagi), PART OF SPEECH noun, DEFAULT CLASSIFIER (-wa), SEMANTIC CLASSES:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "COMMON NOUN: 537:beast, 843:meat/egg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "Figure 3: Japanese Lexical Entry for rabbit \"usagi\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "We only considered sentences with a noun phrase modified by a sortal classifier. Noun phrases modified by group classifiers, such as -soku \"pair\", were not evaluated, as we reasoned that the presence of such a classifier would be marked in the input to the generator. We also did not consider the anaphoric use of numeral classifiers. Although there were many anaphoric examples, resolving them requires robust anaphor resolution, which is a separate problem. We estimate that we would achieve the same accuracy with the anaphoric examples if their referents were known; unfortunately the test set did not always include the full context, so we could not identify the referents and test this. A typical example of anaphoric use is (10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "(10) shukka-ga shipment-NOM ruiseki-de cumulative 500-hon-wo 500-CL-ACC toppa-shita reached Cumulative shipments reached 500 ?barrels/rolls/logs/. . . In total, there were 90 noun phrases modified by a sortal classifier. Our test of the algorithm was done by hand, as we have no Japanese generator. We assumed as input only the fact that a classifier was required, and the semantic classes of the head noun given in the lexicon. Using only the default classifiers predicted by the semantic class, we were able to generate 73 (81%) correctly. A classifier was only judged to be correct if it was exactly the same as that in the original test set. This was almost double the base line of generating the most common classifier (-nin) for all noun phrases, which would have achieved 41%. The results, with a breakdown of the errors, are summarized in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 847,
"end": 854,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
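The evaluation arithmetic quoted above can be checked directly:

```python
# Recomputing the evaluation figures quoted above: 73 of the 90 noun
# phrases modified by a sortal classifier were generated correctly.
correct, total = 73, 90
accuracy = 100 * correct / total
print(round(accuracy))        # 81

# The reported baseline (always generating the most common classifier,
# -nin) achieved 41%, so the semantic-class method roughly doubles it.
print(accuracy / 41 > 1.9)    # True
```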
{
"text": "In this small sample, 6 out of 90 (6.7%) of noun phrases needed to have the default classifier marked for the noun. In fact, there were only 4 different nouns, as two were repeated. We therefore estimate that fewer than 6% of nouns will need to have their own default classifier marked. Had the default classifier for these nouns been marked in the lexicon, our accuracy would have been 88%, the maximum achievable for our method. Table 2 : Results of applying the algorithm Looking at it from another point of view, the Goi-Taikei ontology, although initially designed for Japanese analysis, was also useful for generating Japanese numeral classifiers. We consider that it would be equally useful for the same task with Korean, or even the unrelated language Malay.",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "We generated the residual classifier -tsu for nouns not in the lexicon, this proved to be a bad choice for three unknown words. If we had a method of deducing semantic classes for unknown words we could have used it to predict the classifier more successfully. For example, kikan-t\u014dshika \"institutional investor\" 5 was not in the dictionary, and so we used the semantic class for t\u014dshika \"investor\", which was 175:investor, a sub-type of 5:person. Had kikan-t\u014dshika \"institutional investor\" been marked as a subtype of company, or if we had deduced the semantic class from the modifier, then we would have been able to gener-5 Institutional investors are financial institutions that invest savings of individuals and non-financial companies in the financial markets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "ate the correct classifier -sha. In one case, we felt the default ordering of the semantic classes should have been reversed: 673:tree was listed before 854:edible fruit for ringo \"apple\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "The remaining errors were more problematic. There was one example, 80,000-nin-amari-no sh\u014dmei \"about 80,000 signatures\", which could be treated as referent transfer: sh\u014dmei \"signature\" was being counted with the classifier for people. Another possible analysis is that the classifier is the head of a referential noun phrase with deictic/anaphoric reference, equivalent to the signatures of about 80,000 people. A couple were quite literary in style: for example 10nen-no toshi \"10 years (Lit: 10 years of years)\", where the toshi \"year\" part is redundant, and would not normally be used. In two of the errors the residual classifier was used instead of the more specific default. Shimojo (1997) predicts that this will happen in expressions where the amount is being emphasized more than what is being counted. Intuitively, this applied in both cases, but we were unable to identify any features we could exploit to make this judgment automatically.",
"cite_spans": [
{
"start": 681,
"end": 695,
"text": "Shimojo (1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "A more advanced semantic analysis may be able to dynamically determine the appropriate semantic class for cases of referent transfer, unknown words, or words whose semantic class can be restricted by context. Our algorithm, which ideally generates the classifier from this dynamically determined semantic class, allows us to generate the correct classifier in context, whereas using a default listed for a noun does not. This was our original motivation for generating classifiers from semantic classes, rather than using a classifier listed with each noun as Sornlertlamvanich et al. (1994) do.",
"cite_spans": [
{
"start": 560,
"end": 591,
"text": "Sornlertlamvanich et al. (1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "In this paper we have concentrated on solving the problem of generating appropriate Japanese numeral classifiers using an ontology. In future work, we would like to investigate in more detail the conditions under which a classifier needs to be generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we presented an algorithm to generate Japanese numeral classifiers. It was shown to select the correct sortal classifier 81% of the time. The algorithm uses the ontology provided by Goi-Taikei, a Japanese lexicon, and shows how accurately semantic classes can predict numeral classifiers for the nouns they subsume. We also show how we can improve the accuracy and efficiency further through solving other natural language processing problems, in particular, referent transfer, anaphor resolution and word sense disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Visiting CSLI, Stanford University (1999-2000). 1 Numeral-classifier combinations are shown in bold; the noun phrases they quantify are underlined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the following abbreviations: NOM = nominative; ACC = accusative; ADN = adnominal; CL = classifier; ARGSTR = argument structure; ARG = argument; D-ARG = default argument; QUANT = quantification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In fact, Thai also has a great many group classifiers, much like herd, flock and pack in English. Therefore each noun has two classifiers listed: a sortal classifier and a group classifier. Japanese does not, so we will not discuss the generation of group classifiers here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The test set is available at www.kecl.ntt.co.jp/icl/mtg/resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Kentaro Ogura, Timothy Baldwin, Virach Sornlertlamvanich and the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic classification and an analyzing system of Japanese numerical expressions",
"authors": [
{
"first": "Yoshimi",
"middle": [],
"last": "Asahioka",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Hirakawa",
"suffix": ""
},
{
"first": "Shin-Ya",
"middle": [],
"last": "Amano",
"suffix": ""
}
],
"year": 1990,
"venue": "IPSJ SIG Notes",
"volume": "90",
"issue": "64",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimi Asahioka, Hideki Hirakawa, and Shin-ya Amano. 1990. Semantic classification and an analyzing system of Japanese numerical expres- sions. IPSJ SIG Notes 90-NL-78, 90(64):129- 136, July. (in Japanese).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Japanese Language: An Introduction",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Backhouse",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. E. Backhouse. 1993. The Japanese Language: An Introduction. Oxford University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Classifying correspondence in Japanese and Korean",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Kyonghee",
"middle": [],
"last": "Paik",
"suffix": ""
}
],
"year": 1997,
"venue": "3rd Pacific Association for Computational Linguistics Conference: PACLING-97",
"volume": "",
"issue": "",
"pages": "58--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond and Kyonghee Paik. 1997. Classify- ing correspondence in Japanese and Korean. In 3rd Pacific Association for Computational Lin- guistics Conference: PACLING-97, pages 58-67. Meisei University, Tokyo, Japan.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classifiers in Japanese-to-English machine translation",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Satoru",
"middle": [],
"last": "Ikehara",
"suffix": ""
}
],
"year": 1996,
"venue": "16th International Conference on Computational Linguistics: COLING-96",
"volume": "",
"issue": "",
"pages": "125--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond, Kentaro Ogura, and Satoru Ikehara. 1996. Classifiers in Japanese-to-English machine translation. In 16th International Conference on Computational Linguistics: COLING-96, pages 125-130, Copenhagen, August. (http:// xxx.lanl.gov/abs/cmp-lg/9608014).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Anchoring floating quantifiers in Japaneseto-English machine translation",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Kurz",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shirai",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics: COLING/ACL-98",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond, Daniela Kurz, and Satoshi Shirai. 1998. Anchoring floating quantifiers in Japanese- to-English machine translation. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics: COLING/ACL- 98, pages 152-159, Montreal, Canada.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Numeral Classifier Systems, the case of Japanese. John Benjamins, Amsterdam",
"authors": [
{
"first": "Pamela",
"middle": [],
"last": "Downing",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pamela Downing. 1996. Numeral Classifier Sys- tems, the case of Japanese. John Benjamins, Am- sterdam.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Goi-Taikei -A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes/CDROM",
"authors": [
{
"first": "Satoru",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Akio",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "Hiromi",
"middle": [],
"last": "Nakaiwa",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Ooyama",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Ooyama, and Yoshihiko Hayashi. 1997. Goi-Taikei -A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes/CDROM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An analysis of NP-like quantifiers in Japanese",
"authors": [
{
"first": "Shin-Ichiro",
"middle": [],
"last": "Kamei",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Muraki",
"suffix": ""
}
],
"year": 1995,
"venue": "First Natural Language Processing Pacific Rim Symposium: NLPRS-95",
"volume": "1",
"issue": "",
"pages": "163--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shin-ichiro Kamei and Kazunori Muraki. 1995. An analysis of NP-like quantifiers in Japanese. In First Natural Language Processing Pacific Rim Symposium: NLPRS-95, volume 1, pages 163- 167.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "KBMT-89 -a knowledge-based MT project at Carnegie Mellon University",
"authors": [
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "16--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergei Nirenburg. 1989. KBMT-89 -a knowledge-based MT project at Carnegie Mellon University. pages 141-147, Aug. 16-18.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 1995. The Generative Lexicon. MIT Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The role of the general category in the maintenance of numeral classifier systems: The case of tsu and ko in Japanese",
"authors": [
{
"first": "Mitsuaki",
"middle": [],
"last": "Shimojo",
"suffix": ""
}
],
"year": 1997,
"venue": "Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitsuaki Shimojo. 1997. The role of the general category in the maintenance of numeral classi- fier systems: The case of tsu and ko in Japanese. Linguistics, 35(4). (http://ifrm.glocom. ac.jp/doc/s01.001.html).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Classifier assignment by corpus-based approach",
"authors": [
{
"first": "Virach",
"middle": [],
"last": "Sornlertlamvanich",
"suffix": ""
},
{
"first": "Wantanee",
"middle": [],
"last": "Pantachat",
"suffix": ""
},
{
"first": "Surapant",
"middle": [],
"last": "Meknavin",
"suffix": ""
}
],
"year": 1994,
"venue": "15th International Conference on Computational Linguistics: COLING-94",
"volume": "",
"issue": "",
"pages": "556--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Virach Sornlertlamvanich, Wantanee Pantachat, and Surapant Meknavin. 1994. Classifier assignment by corpus-based approach. In 15th International Conference on Computa- tional Linguistics: COLING-94, pages 556- 561, August. (http://xxx.lanl.gov/ abs/cmp-lg/9411027).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Aimai-na s\u016bry\u014dshi-o fukumu meishiku-no kaisekih\u014d [a method for analysing noun phrases with ambiguous quantifiers",
"authors": [
{
"first": "Shoichi",
"middle": [],
"last": "Yokoyama",
"suffix": ""
},
{
"first": "Takeru",
"middle": [],
"last": "Ochiai",
"suffix": ""
}
],
"year": 1999,
"venue": "X=OpjB17eid<mk BtN2@!z|CHC\\^Bc? s`^ } ^ W`^{=r<d*Rb8 i B2P;eidB92@!aT=}^< eid<l1'mk<P }^^rY<V\" <Opj;wn:eidBJMF035!4<",
"volume": "",
"issue": "",
"pages": "550--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shoichi Yokoyama and Takeru Ochiai. 1999. Aimai-na s\u016bry\u014dshi-o fukumu meishiku-no kaisekih\u014d [a method for analysing noun phrases with ambiguous quantifiers.]. In 5th Annual Meeting of the Association for Natural Language Processing, pages 550-553. The Association for Natural Language Processing. (in Japanese). [X=OpjB17eid<mk BtN2@!z|CHC\\^Bc? s`^ } ^ W`^{=r<d*Rb8 i B2P;eidB92@!aT=}^< eid<l1'mk<P }^^rY<V\" <Opj;wn:eidBJMF035!4<",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "ZS &$#>8l1-mk2@/9*8+5! >5 %#uy<d=",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ZS &$#>8l1-mk2@/9*8+5! >5 %#uy<d=]<LIK8gB",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "also function as noun phrases on their own, with anaphoric or deictic reference, when what is being quantified is recoverable from the context. For example (7) is acceptable if the letters have already been referred to, or are clearly visible.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "1. For a simple noun phrase (a) If the head noun has a default classifier in the lexicon: use the noun's default classifier (b) Else if it exists, use the default classifier of the head noun's first listed semantic class (the class's default classifier) (c) Else use the residual classifier -tsu 2. For a coordinate noun phrase generate the classifier for each noun phrase use the most frequent classifier Algorithm to generate numeral classifiers",
"num": null
},
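The selection algorithm in the FIGREF1 caption can be sketched as follows. The tiny lexicon and ontology entries (NOUN_DEFAULTS, CLASS_DEFAULTS, NOUN_CLASSES) are hypothetical stand-ins for the Goi-Taikei data, not the actual resource.

```python
from collections import Counter

# Hypothetical stand-ins for the Goi-Taikei lexicon and ontology.
NOUN_DEFAULTS = {"hon": "satsu"}                  # noun's own default classifier
CLASS_DEFAULTS = {"animal": "hiki"}               # semantic class's default classifier
NOUN_CLASSES = {"inu": ["animal"], "kangae": []}  # noun -> listed semantic classes
RESIDUAL = "tsu"                                  # residual classifier -tsu

def classifier_for(noun: str) -> str:
    """Step 1: select a sortal classifier for a simple noun phrase."""
    if noun in NOUN_DEFAULTS:                     # 1(a) noun's default classifier
        return NOUN_DEFAULTS[noun]
    classes = NOUN_CLASSES.get(noun, [])
    if classes and classes[0] in CLASS_DEFAULTS:  # 1(b) first class's default
        return CLASS_DEFAULTS[classes[0]]
    return RESIDUAL                               # 1(c) residual -tsu

def classifier_for_coordinate(nouns) -> str:
    """Step 2: for a coordinate NP, use the most frequent classifier."""
    counts = Counter(classifier_for(n) for n in nouns)
    return counts.most_common(1)[0][0]

print(classifier_for("hon"))     # -> satsu  (noun default)
print(classifier_for("inu"))     # -> hiki   (class default)
print(classifier_for("kangae"))  # -> tsu    (residual)
```

Note that step 1(b) consults only the first listed semantic class, as in the figure; inheritance keeps the per-noun entries sparse, since most nouns fall back to their class default or the residual -tsu.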
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Top three levels of the Goi-Taikei Common Noun Ontology",
"num": null
}
}
}
}