|
{ |
|
"paper_id": "E99-1028", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:37:27.469045Z" |
|
}, |
|
"title": "Word Sense Disambiguation in Untagged Text based on Term Weight Learning", |
|
"authors": [ |
|
{ |
|
"first": "Fumiyo", |
|
"middle": [], |
|
"last": "Fukumoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yamanashi University", |
|
"location": { |
|
"postCode": "4-3-11, 400-8511", |
|
"settlement": "Takeda, Kofu", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yoshimi", |
|
"middle": [], |
|
"last": "Suzukit", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Yamanashi University", |
|
"location": { |
|
"postCode": "4-3-11, 400-8511", |
|
"settlement": "Takeda, Kofu", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes unsupervised learning algorithm for disambiguating verbal word senses using term weight learning. In our method, collocations which characterise every sense are extracted using similarity-based estimation. For the results, term weight learning is performed. Parameters of term weighting are then estimated so as to maximise the collocations which characterise every sense and minimise the other collocations. The resuits of experiment demonstrate the effectiveness of the method.", |
|
"pdf_parse": { |
|
"paper_id": "E99-1028", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes unsupervised learning algorithm for disambiguating verbal word senses using term weight learning. In our method, collocations which characterise every sense are extracted using similarity-based estimation. For the results, term weight learning is performed. Parameters of term weighting are then estimated so as to maximise the collocations which characterise every sense and minimise the other collocations. The resuits of experiment demonstrate the effectiveness of the method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "One of the major approaches to disambiguate word senses is supervised learning (Gale et al., 1992) , (Yarowsky, 1992) , (Bruce and Janyce, 1994) , (Miller et al., 1994) , (Niwa and Nitta, 1994) , (Luk, 1995) , (Ng and Lee, 1996) , (Wilks and Stevenson, 1998 ). However, a major obstacle impedes the acquisition of lexical knowledge from corpora, i.e. the difficulties of manually sensetagging a training corpus, since this limits the applicability of many approaches to domains where this hard to acquire knowledge is already available. This paper describes unsupervised learning algorithm for disambiguating verbal word senses using term weight learning. In our approach, an overlapping clustering algorithm based on Mutual information-based (Mu) term weight learning between a verb and a noun is applied to a set of verbs. It is preferable that Mu is not low (Mu(x,y) _> 3) for a reliable statistical analysis (Church et al., 1991) . However, this suffers from the problem of data sparseness, i.e. the co-occurrences which are used to represent every distinct senses does not appear in the test data. To attack this problem, for a low Mu value, we distinguish between unobserved co-occurrences that are likely to occur in a new corpus and those that are not, by using similarity-based estimation between two cooccurrences of words. For the results, term weight learning is performed. Parameters of term weighting are then estimated so as to maximise the collocations which characterise every sense and minimise the other collocations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 98, |
|
"text": "(Gale et al., 1992)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 117, |
|
"text": "(Yarowsky, 1992)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 144, |
|
"text": "(Bruce and Janyce, 1994)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 168, |
|
"text": "(Miller et al., 1994)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 193, |
|
"text": "(Niwa and Nitta, 1994)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 207, |
|
"text": "(Luk, 1995)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 228, |
|
"text": "(Ng and Lee, 1996)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 257, |
|
"text": "(Wilks and Stevenson, 1998", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 861, |
|
"end": 869, |
|
"text": "(Mu(x,y)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 933, |
|
"text": "(Church et al., 1991)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
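
{

"text": "A minimal illustrative sketch (not part of the paper) of how the mutual information score Mu used above could be estimated from verb-noun co-occurrence counts; the function name, the toy counts and the corpus size are assumptions made only for this example.\n\n```python\nimport math\n\ndef mu(pair_count, verb_count, noun_count, total_pairs):\n    # Pointwise mutual information: Mu(v, n) = log2( P(v, n) / (P(v) * P(n)) ).\n    p_vn = pair_count / total_pairs\n    p_v = verb_count / total_pairs\n    p_n = noun_count / total_pairs\n    return math.log2(p_vn / (p_v * p_n))\n\n# Illustrative counts: 'take' and 'stake' co-occur 30 times among 100,000 window pairs.\nprint(mu(30, 600, 200, 100000))  # about 4.64, above the Mu >= 3 reliability threshold\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},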
|
{ |
|
"text": "In the following sections, we first define a polysemy from the viewpoint of clustering, then describe how to extract collocations using similaritybased estimation. Next, we present a clustering method and a method for verbal word sense disambiguation using the result of clustering. Finally, we report on an experiment in order to show the effect of the method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most previous corpus-based WSD algorithms are based on the fact that semantically similar words appear in a similar context. Semantically similar verbs, for example, co-occur with the same nouns. The following sentences from the Wall Street Journal show polysemous usages of take.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polysemy in Context", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(sl) Coke has typically taken a minority stake in such ventures. (sl') Guber and pepers tried to buy a stake in mgm in 1988. s2That process of sorting out specifies is likely to take time. (s2') We spent a lot of time and money in building our group of stations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polysemy in Context", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let us consider a two-dimensional Euclidean space spanned by the two axes, each associated with stake and time, and in which take is assigned a vector whose value of the i-th dimension is the value of Mu between the verb and the noun assigned to the i-th axis. Take co-occurs with the two nouns, while buy and spend co-occur only with one of the two nouns. Therefore, the distances between take and these two verbs are large In order to capture the synonymy of take with the two verbs correctly, one has to decompose the vector assigned to take into two component vectors, takel and take2, each of which corresponds to one of the two distinct usages of take (in Figure 1 ). (we call them hypothetical verbs in the following). The decomposition of a vector into a set of its component vectors requires a proper decomposition of the context in which the word occurs. Furthermore, in a general situation, a polysemous verb co-occurs with a large group of nouns and one has to divide the group of nouns into a set of subgroups, each of which correctly characterises the context for a specific sense of the polysemous word. Therefore, the algorithm has to be able to determine when the context of a word should be divided and how.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 662, |
|
"end": 672, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Polysemy in Context", |
|
"sec_num": null |
|
}, |
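
{

"text": "A toy sketch of the vector decomposition described above; the Mu values and the hand-given noun subgroups are invented for illustration (the algorithm itself derives the subgroups by clustering), and numpy is used only for convenience.\n\n```python\nimport numpy as np\n\nnouns = ['stake', 'time']        # axes of the toy Euclidean space\ntake = np.array([4.6, 3.9])      # invented Mu values of 'take' with each noun\nbuy = np.array([5.1, 0.0])\nspend = np.array([0.0, 4.8])\n\ndef split(vector, noun_groups):\n    # One hypothetical verb per noun subgroup: keep the Mu values on the\n    # dimensions belonging to the subgroup and zero out the rest.\n    return [np.where([n in g for n in nouns], vector, 0.0) for g in noun_groups]\n\ntake1, take2 = split(take, [{'stake'}, {'time'}])\n# take1 now lies close to 'buy' and take2 close to 'spend' in the space.\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Polysemy in Context",

"sec_num": null

},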
|
{ |
|
"text": "The approach proposed in this paper explicitly introduces new entities, i.e. hypothetical verbs when an entity is judged polysemous and associates them with contexts which are sub-contexts of the context of the original entity\u2022 Our algorithm has two basic operations, splitting and lumping\u2022 Splitting means to divide a polysemous verb into two hypothetical verbs and lumping means to combine two hypothetical verbs to make one verb out of them (Fukumoto and Tsujii, 1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 444, |
|
"end": 471, |
|
"text": "(Fukumoto and Tsujii, 1994)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polysemy in Context", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given a set of verbs, vl, v2,--., v,~, the algorithm produces a set of semantic clusters, which are ordered in the ascending order of their semantic deviation values\u2022 Semantic deviation is a measure of the deviation of the set in an n-dimensional Euclidean space, where n is the number of nouns which co-occur with the verbs\u2022 In our algorithm, if vi is non-polysemous, it belongs to at least one of the resultant semantic clusters. If it is polysemous, the algorithm splits it into several hypothetical verbs and each of them belongs to at least one of the clusters\u2022 Table 1 summarises the sample result from the set {close, open, end}. In Table 1 , subsets 'open' and 'end' correspond to the distinct senses of'close'. Mu(vi,n) is the value of mutual information between a verb and a noun. If a polysemous verb is followed by a noun which belongs to a set of the nouns, the meaning of the verb within the sentence can be determined accordingly, because a set of the nouns characterises one of the possible senses of the verb.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 567, |
|
"end": 574, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 640, |
|
"end": 647, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The basic assumption of our approach is that a polysemous verb could not be recognised correctly if collocations which represent every distinct senses of a polysemous verb were not weighted correctly. In particular, for a low Mu value, we have to distinguish between those unobserved co-occurrences that are likely to occur in a new corpus and those that are not. We extracted these collocations which represent every distinct senses of a polysemous verb using similarity-based estimation. Let (wv, nq) and (w~i , nq) be two different co-occurrence pairs. We say that wv and nq are semantically related if w~i and nq are semantically related and (wp, nq) and (w~i , nq) are semantically similar (Dagan et al., 1993) . Using the estimation, collocations are extracted and term weight learning is performed. Parameters of term weighting are then estimated so as to maximise the collocations which characterise every sense and minimise the other collocations. (v,n.) , (wp,n.) and (wl,n.) are set to/3 (0 < /3 < 1) end_if end_for end_for end Figure 2 , (a-l) is the procedure to extract collocations which were not weighted correctly and (a-2) and (b) are the procedures to extract other words which were not weighted correctly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 695, |
|
"end": 715, |
|
"text": "(Dagan et al., 1993)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 957, |
|
"end": 963, |
|
"text": "(v,n.)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 966, |
|
"end": 973, |
|
"text": "(wp,n.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1039, |
|
"end": 1047, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
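
{

"text": "A rough sketch of one pass of the term weight learning step described above; the dictionary layout, the function name and the default parameter values are assumptions (the paper adjusts α and β in 0.001 steps until the deviation constraints of Section 4 are satisfied, which is not reproduced here). The Mu values are taken from Table 1.\n\n```python\ndef reweight(mu_table, sense_collocations, alpha=1.001, beta=0.999):\n    # Scale up (alpha > 1) the Mu values of pairs judged to characterise a\n    # distinct sense, and scale down (0 < beta < 1) all remaining pairs.\n    return {pair: (alpha if pair in sense_collocations else beta) * value\n            for pair, value in mu_table.items()}\n\nmu_table = {('close', 'book'): 4.427, ('close', 'account'): 2.116}\nmu_table = reweight(mu_table, sense_collocations={('close', 'book')})\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extraction of Collocations",

"sec_num": null

},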
|
{ |
|
"text": "Sim (vi, v~) in Figure 2 is the similarity value ofvl and v~ which is measured by the inner product of their normalised vectors, and is shown in formula (1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 12, |
|
"text": "(vi, v~)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 24, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "v i \u00d7 ~)~ vi = (v~:,..-,v~) (1) { Mu(vi,nj) ifMu(vi,nj) >_ 3 vii = 0 otherwise (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
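
{

"text": "A small sketch of formulas (1) and (2) above: the similarity of two verbs as the inner product of their normalised Mu vectors, keeping only dimensions whose Mu value is at least 3. The toy vectors are invented; only the formulas themselves come from the paper.\n\n```python\nimport math\n\ndef sim(v1, v2):\n    # Formula (2): keep a dimension only when its Mu value is at least 3.\n    v1 = [x if x >= 3 else 0.0 for x in v1]\n    v2 = [x if x >= 3 else 0.0 for x in v2]\n    # Formula (1): inner product of the normalised vectors.\n    n1 = math.sqrt(sum(x * x for x in v1))\n    n2 = math.sqrt(sum(x * x for x in v2))\n    if n1 == 0.0 or n2 == 0.0:\n        return 0.0\n    return sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)\n\nprint(sim([4.4, 0.0, 3.7], [4.1, 2.0, 3.2]))  # Mu vectors over the same noun axes\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extraction of Collocations",

"sec_num": null

},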
|
{ |
|
"text": "In formula (1), k is the number of nouns which co-occur with vi. vii is the Mu value between vl and nj. We recall that wp and nq are semantically related if w~i and nq are semantically related and (wv,n q) and (w'pi,nq) are semantically similar. (a) ' and nq are se-in Figure 2 , we represent wpi mantically related when Mu(w~i,nq) >__ 3. Also, (wv,nq) and (w'pi,nq) are semantically similar if t For wt, we can replace wp with wt, nq 6 N_Sett -N_Sets with nq E N_Set, -N.Sets, and Sim(wp, w'pl) > 0 with Sirn(wt, w'pi) > O.", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 352, |
|
"text": "(wv,nq)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 366, |
|
"text": "(w'pi,nq)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 277, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sim(wp, w~i ) > 0. In (a)of Figure 2 , for example, when (wp,nq) is judged to be a collocation which represents every distinct senses, we set Mu values of (wp,nq) and (v,nq) to a x Mu(wp,nq) and a x Mu(v,r%), 1 < a. On the other hand, when nq is judged not to be a collocation which represents every distinct senses, we set Mu values of these co-occurrence pairs to fl x Mu(wp,nq) and /3 x Mu(v,nq), 0 < j3 < 1 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 173, |
|
"text": "(v,nq)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 36, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extraction of Collocations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Clustering a Set of Verbs Given a set of verbs, VG = {vl, ---, vm}, the algorithm produces a set of semantic clusters, which are sorted in ascending order of their semantic de- Let v be an element included in both Seti and Set 3. To determine whether v has two senses wp, where wp is an element of Seti, and wl, where wl is an element of Set3, we make two clusters, as shown in (4) and their merged cluster, as shown in (5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
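
{

"text": "The exact normalisation of the semantic deviation formula (3) is not recoverable from this parse, so the following is only a stand-in sketch: it uses the mean Euclidean distance of the member Mu vectors from their centre of gravity, which preserves the intended ordering (a tighter, more synonym-like set gets a smaller value).\n\n```python\nimport numpy as np\n\ndef deviation(mu_vectors):\n    # Stand-in for formula (3): mean distance from the centre of gravity;\n    # a smaller value means the set is semantically less deviant.\n    vectors = np.asarray(mu_vectors, dtype=float)\n    centre = vectors.mean(axis=0)\n    return float(np.linalg.norm(vectors - centre, axis=1).mean())\n\nprint(deviation([[4.4, 0.0], [4.1, 0.5]]))  # a tight, synonym-like pair\nprint(deviation([[4.4, 0.0], [0.0, 4.9]]))  # a mixed pair, larger deviation\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Clustering a Set of Verbs",

"sec_num": "4"

},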
|
{ |
|
"text": "wp,---,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "{vl, wp}, {v=, wl,---, (4) {v,", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, v and wp are verbs and wl, \u2022 \u2022 -, w,~ are verbs or hypothetical verbs, wl, \"-', wp, -.-, w,~ in 5satisfy Dev(v, wi) < Dev(v,wj) (1 < i _< j < n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "{vl, wp}, {v=, wl,---, (4) {v,", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "vl and v2 in (4) are new hypothetical verbs which correspond to two distinct senses of v. If v is a polysemy, but is not recognised correctly, then Extraction-of-Collocations shown in Figure 2 is applied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 192, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "{vl, wp}, {v=, wl,---, (4) {v,", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Extraction-of-Collocations, for (4) and (5), a and /3 are estimated so as to satisfy (6) and (7). D,v(,.,,,~,,)_< O~v (,,~,,,-.-,~,,,,. -,,=n) (6) Dev (v2, w, , ..., w, ~) < Oev(v, w, , ..., wp, .., , w, ~) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 158, |
|
"text": "(v2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 161, |
|
"text": "w,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 163, |
|
"text": ",", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 168, |
|
"text": "...,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 171, |
|
"text": "w,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 183, |
|
"text": "~) < Oev(v,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 186, |
|
"text": "w,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 188, |
|
"text": ",", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 193, |
|
"text": "...,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 197, |
|
"text": "wp,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 201, |
|
"text": "..,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 203, |
|
"text": ",", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 206, |
|
"text": "w,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 209, |
|
"text": "~)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 138, |
|
"text": "(,,~,,,-.-,~,,,,.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "{vl, wp}, {v=, wl,---, (4) {v,", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "The whole process is repeated until the newly obtained cluster, Setx, contains all the verbs in the input or the ICS is exhausted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "{vl, wp}, {v=, wl,---, (4) {v,", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used the result of our clustering analysis, which consists of pairs of collocations of a distinct sense of a polysemous verb and a noun. Let v has senses vl, v2, \"--, v,~. The sense of a polysemous verb v is vi (1 < i < m) if t", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Sense Disambiguation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Ej Mu(vi,ni) is largest among Ej Mu(vl,nj),", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 12, |
|
"text": "Mu(vi,ni)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "~-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 .. and Et~ Mu (v,~,nj) . Here, t is the number of nouns which co-occur with v within the five-word distance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 24, |
|
"text": "(v,~,nj)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "~-", |
|
"sec_num": null |
|
}, |
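
{

"text": "An illustrative sketch of the decision rule stated above: the sense whose accumulated Mu over the nouns co-occurring with v (within the five-word window) is largest is chosen. The function name is ours; the example Mu values are the ones listed for 'close' in Table 1.\n\n```python\ndef disambiguate(context_nouns, sense_mu):\n    # Choose the sense vi whose summed Mu over the co-occurring nouns is largest.\n    scores = {sense: sum(table.get(n, 0.0) for n in context_nouns)\n              for sense, table in sense_mu.items()}\n    return max(scores, key=scores.get)\n\nsense_mu = {\n    'close1 (open)': {'account': 2.116, 'book': 4.427, 'bottle': 3.650},\n    'close2 (end)': {'conversation': 4.890, 'period': 1.876},\n}\nprint(disambiguate(['book', 'account'], sense_mu))  # -> 'close1 (open)'\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word Sense Disambiguation",

"sec_num": "5"

},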
|
{ |
|
"text": "This section describes an experiment conducted to evaluate the performance of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The data we have used is 1989 Wall Street Journal (WSJ) in ACL/DCI CD-ROM which consists of 2,878,688 occurrences of part-of-speech tagged words (Brill, 1992) . The inflected forms of the same nouns and verbs are treated as single units. For example, 'book' and 'books' are treated as single units. We obtained 5,940,193 word pairs in a window size of 5 words, 2,743,974 different word pairs. From these, we selected collocations of a verb and a noun.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 158, |
|
"text": "(Brill, 1992)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "6.1" |
|
}, |
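
{

"text": "A minimal sketch of how verb-noun co-occurrence pairs within a window of five words could be collected from part-of-speech tagged text; Penn-Treebank-style tags are assumed, and the merging of inflected forms ('book'/'books') described above is omitted for brevity.\n\n```python\nfrom collections import Counter\n\ndef verb_noun_pairs(tagged_tokens, window=5):\n    # Count (verb, noun) pairs whose members occur within `window` words of each other.\n    pairs = Counter()\n    for i, (word, tag) in enumerate(tagged_tokens):\n        if not tag.startswith('VB'):\n            continue\n        lo, hi = max(0, i - window), min(len(tagged_tokens), i + window + 1)\n        for j in range(lo, hi):\n            other, other_tag = tagged_tokens[j]\n            if j != i and other_tag.startswith('NN'):\n                pairs[(word.lower(), other.lower())] += 1\n    return pairs\n\nsample = [('Coke', 'NNP'), ('has', 'VBZ'), ('taken', 'VBN'),\n          ('a', 'DT'), ('minority', 'NN'), ('stake', 'NN')]\nprint(verb_noun_pairs(sample))\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": "6.1"

},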
|
{ |
|
"text": "As a test data, we used 40 sets of verbs. We selected at most four senses for each verb, the best sense, from among the set of the Collins dictionary and thesaurus (McLeod, 1987) , is determined by a human judge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 178, |
|
"text": "(McLeod, 1987)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The results of the experiment are shown in Table 2 , Table 3 and Table 4 . In Table 2 , 3 and 4, every polysemous verb has two, three and four senses, respectively. Column 1 in Table 2, 3 and 4 shows the test data. The verb v is a polysemous verb and the remains show these senses. For example, 'cause' of (1) in Table 2 has two senses, 'effect' and 'produce'. 'Sentence' shows the number of sentences of occurrences of a polysemous verb, and column 4 shows their distributions. 'v' shows the number of polysemous verbs in the data. W in Table 2 shows the number of nouns which co-occur with wp and wl. v n W shows the number of nouns which co-occur with both v and W. In a similar way, W in Table 3 and 4 shows the number of nouns which co-occur with wp ~ w2 and wp ~ w3, respectively. 'Correct' shows the performance of our method. 'Total' in the bottom of Table 4 shows the performance of 40 sets of verbs. Table 2 shows when polysemous verbs have two senses, the percentage attained at 80.0%. When polysemous verbs have three and four senses, the percentage was 77.7% and 76.4%, respectively. This shows that there is no striking difference among them. Column 8 and 9 in Table 2, 3 and 4 show the results of collocations which were extracted by our method. ---, Set.,,,,;-, Figure 3 : Flow of the algorithm Mu < 3 shows the number of nouns which satisfy Mu(wp,n) < 3 or Mu(wt,n) <3. 'Correct' shows the total number of collocations which could be estimated correctly. Table 2 ~ 4 show that the frequency of v is proportional to that of v M W. As a result, the larger the number of v M W is, the higher the percentage of correctness of collocations is.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 51, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 54, |
|
"end": 73, |
|
"text": "Table 3 and Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 86, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 194, |
|
"text": "Table 2, 3 and 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 322, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 547, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 694, |
|
"end": 702, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 862, |
|
"end": 869, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 913, |
|
"end": 920, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1178, |
|
"end": 1195, |
|
"text": "Table 2, 3 and 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1265, |
|
"end": 1281, |
|
"text": "---, Set.,,,,;-,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1282, |
|
"end": 1290, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1476, |
|
"end": 1483, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Related Work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unsupervised learning approaches, i.e. to determine the class membership of each object to be classified in a sample without using sensetagged training examples of correct classifications, is considered to have an advantage over supervised learning algorithms, as it does not require costly hand-tagged training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Schiitze and Zernik's methods avoid tagging each occurrence in the training corpus. Their methods associate each sense of a polysemous word with a set of its co-occurring words (Schutze, 1992) , (Zernik, 1991) . Ifa word has several senses, then the word is associated with several different sets of co-occurring words, each of which corresponds to one of the senses of the word. The weakness of Schiitze and Zernik's method, however, is that it solely relies on human intuition for identifying different senses of a word, i.e. the human editor has to determine, by her/his intuition, how many senses a word has, and then identify the sets of co-occurring words that correspond to the different senses. Yarowsky used an unsupervised learning procedure to perform noun WSD (Yarowsky, 1995) . This algorithm requires a small number of training examples to serve as a seed. The result shows that the average percentage attained was 96.1% for 12 nouns when the training data was a 460 million word corpus, although Yarowsky uses only nouns and does not discuss distinguishing more than two senses of a word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 192, |
|
"text": "(Schutze, 1992)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 209, |
|
"text": "(Zernik, 1991)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 772, |
|
"end": 788, |
|
"text": "(Yarowsky, 1995)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A more recent unsupervised approach is described in (Pedersen and Bruce, 1997) . They presented three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text, i.e. McQuitty's similarity analysis, Ward's minimum-variance method and the EM algorithm. These algorithms assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. Their methods are perhaps the most similar to our present work. They reported that disambiguating nouns is more successful rather than adjectives or verbs and the best result of verbs was McQuitty's method (71.8%), although they only tested 13 ambiguous words (of these, there are only 4 verbs). Furthermore, each has at most three senses. In future, we will compare our method with their methods using the data we used in our experiment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 78, |
|
"text": "(Pedersen and Bruce, 1997)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "7", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this study, we proposed a method for disambiguating verbal word senses using term weight learning based on similarity-based estimation. The results showed that when polysemous verbs have two, three and four senses, the average percentage attained at 80.0%, 77.7% and 76.4%, respectively. Our method assumes that nouns which co-occur with a polysemous verb is disambiguated in advance. In future, we will extend our method to cope with this problem and also apply our method to not only a verb but also a noun and an adjective sense disambiguation to evaluate our method. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Using Wall Street Journal, we obtained 13 = 0.964 and 7 = -0.495.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank the reviewers for their valuable comments. This work was supported by the Grant-in-aid for the Japan Society for the Promotion of Science(JSPS).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A simple rule-based part of speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of the 3rd Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Brill. 1992. A simple rule-based part of speech tagger. In Proc. of the 3rd Conference on Ap- plied Natural Language Processing, pages 152- 155.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word-sense disambiguation using decomposable models", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bruce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Janyce", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the 32nd Annual Meeting", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--145", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Bruce and W. Janyce. 1994. Word-sense dis- ambiguation using decomposable models. In Proc. of the 32nd Annual Meeting, pages 139- 145.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Using statistics in lexical analysis", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hindte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Lezical acquisition: Ezploiting on-line resources to build a lezicon", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. W. Church, W. Gale, P. Hanks, and D. Hindte. 1991. Using statistics in lexical analysis. In Lezical acquisition: Ezploiting on-line resources to build a lezicon, pages 115-164. (Zernik Uri (ed.)), London, Lawrence Erlbaum Associates.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Contextual word similarity and estimation from sparse data", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Fernando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lilian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proc. of the 31th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dagan, P. Fernando, and L. Lilian. 1993. Con- textual word similarity and estimation from sparse data. In Proc. of the 31th Annual Meet- ing of the ACL, pages 164-171.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic recognition of verbal polysemy", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Fukumoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the 15th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "762--768", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Fukumoto and J. Tsujii. 1994. Automatic recognition of verbal polysemy. In Proc. of the 15th COLING, Kyoto, Japan, pages 762-768.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A method for disambiguating word senses in a large corpus", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Computers and the Humanities", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "415--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. K. Gale, K. W. Church, and D. Yarowsky. 1992. A method for disambiguating word senses in a large corpus. In Computers and the Hu- manities, volume 26, pages 415-439.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Statistical sense disambiguation with relatively small corpora using dictionary definitions", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Luk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of the 335t Annual Meeting of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. K. Luk. 1995. Statistical sense disambiguation with relatively small corpora using dictionary definitions. In Proc. of the 335t Annual Meeting of ACL, pages 181-188.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The new collins dictionary and thesaurus in one volume", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Mcleod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. T. McLeod. 1987. The new collins dictionary and thesaurus in one volume. London, Harper- Collins Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Using a semantic concordance for sense identification", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Shari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Claudia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Thomas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the ARPA Workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "240--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Miller, C. Martin, L. Shari, L. Claudia, and R. G. Thomas. 1994. Using a semantic concor- dance for sense identification. In Proc. of the ARPA Workshop on Human Language Technol- ogy, pages 240-243.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Integrating multiple knowledge sources to disambiguate word 2", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. of the 34th Annual Meeting of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. T. Ng and H. B. Lee. 1996. Integrating mul- tiple knowledge sources to disambiguate word 2,139 11636(76.4) [ 9,706[ [ [ 7,572(75.6) II I I sense: An examplar-based approach. In Proc. of the 34th Annual Meeting of ACL, pages 40- 47.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Co-occurrence vectors from corpora vs. distance vectors from dictionaries", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Niwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Nitta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of 15th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "304--309", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Niwa and Y. Nitta. 1994. Co-occurrence vec- tors from corpora vs. distance vectors from dic- tionaries. In Proc. of 15th COLING, Kyoto, Japan, pages 304-309.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Distinguishing word senses in untagged text", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Bruce", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proc. of the 2nd Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "197--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T. Pedersen and R. Bruce. 1997. Distinguishing word senses in untagged text. In Proc. of the 2nd Conference on Empirical Methods in Natu- ral Language Processing, pages 197-207.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Dimensions of meaning", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of Supercomputing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "787--796", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Schutze. 1992. Dimensions of meaning. In Proc. of Supercomputing, pages 787-796.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Word sense disambiguation using optimised combinations of knowledge sources", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Wilks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proe. of the COLING-ACL'98", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1398--1402", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Wilks and M. Stevenson. 1998. Word sense dis- ambiguation using optimised combinations of knowledge sources. In Proe. of the COLING- ACL'98, pages 1398-1402.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Word sense disambiguation using statistical models of roget's categories trained on large corpora", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of the l$th COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "454--460", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Yarowsky. 1992. Word sense disambiguation using statistical models of roget's categories trained on large corpora. In Proc. of the l$th COLING, pages 454--460.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Unsupervised word sense disambiguation rivaling supervised methods", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proc. of the 33rd Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Proc. of the 33rd Annual Meeting of the ACL, pages 189-196.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Trainl vs. train2: Tagging word senses in corpus", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Zernik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Lexical acquisition: Exploiting on-line resources to build a lexicon", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "U. Zernik. 1991. Trainl vs. train2: Tagging word senses in corpus. In Lexical acquisi- tion: Exploiting on-line resources to build a lex- icon, pages 91-112. Uri Zernik(Ed.), London, Lawrence Erlbaum Associates.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "The decomposition of the verb take", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Extraction of collocations is shown in Figure 2 t In", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "v{ and n i. ~ = ~-~i=lvij In the experiment, we set increment value of a and decrease value of/3 to 0.001. is the j-th value of the centre of gravity. [ 0 [ = vii) is the length of the centre of gravity. In formula (3), a set with a smaller value is considered semantically less deviant.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "shows the flow of the clustering algorithm. As shown in '(' inFigure 3, the function Make-Inltial-Cluster-Set applies to VG and produces all possible pairs of verbs with their semantic deviation values. The result is a list of pairs called the ICS (Initial Cluster Set). The CCS (Created Cluster Set) shows the clusters which have been created so far. The function Make-Temporary-Cluster-Set retrieves the clusters from the CCS which contain one of the verbs of Seti. The results (Set~3) are passed to the function Reeognition-of-Polysemy, which determines whether or not a verb is polysemous.", |
|
"num": null |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Vi</td><td>n</td><td>Mu(vi ,n)</td></tr><tr><td>closel</td><td>account</td><td>2.116</td></tr><tr><td>(open)</td><td>banking</td><td>2.026</td></tr><tr><td/><td>acquisition</td><td>1.072</td></tr><tr><td/><td>book</td><td>4.427</td></tr><tr><td/><td>bottle</td><td>3.650</td></tr><tr><td>close2</td><td>announcement</td><td>1.692</td></tr><tr><td>(end)</td><td>connection</td><td>2.745</td></tr><tr><td/><td>conversation</td><td>4.890</td></tr><tr><td/><td>period</td><td>1.876</td></tr><tr><td/><td>practice</td><td>2.564</td></tr></table>", |
|
"text": "Distinct senses of the verb 'close'" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>(a-l)</td><td>then parameters of Mu of(wp,nq) and (v,rtq) are set to a (1 < a)</td></tr><tr><td>(a-2)</td><td>else parameters of Mu of (wp,nq) and (V,nq) are set to ~ (0 </3 < 1)</td></tr><tr><td/><td>end_if</td></tr><tr><td/><td>end_for</td></tr><tr><td/><td>end_for</td></tr><tr><td colspan=\"2\">(b) for all n, E g_Set3 such that Mu(wp,rt,) >_ 3 and Mu(wt,n,) > 3</td></tr><tr><td/><td>t Extract wp~ (1 < i < t) such that Mu(w~, ~) > 3. Here, t is the number of verbs which</td></tr><tr><td/><td>co-occur with n,</td></tr><tr><td/><td>for all w~i</td></tr><tr><td/><td>if w;, exists such that Sirn(wp,w'pl ) > 0 and Sirn(wt,w;i ) > 0</td></tr><tr><td/><td>then parameters of Mu of</td></tr></table>", |
|
"text": "Let v be two senses, wp and wl, but not be judged correctly. Let N_Setl be a set of nouns which co-occur with both v and wp, but do not cooccur with wl. Let also N.Set2 be a set of nouns which co-occur with both v and wl, but do not Extract wpi (1 < i < s) such that Mu(w~i, nq) > 3. Here, s is the number of verbs which co-occur with nq for all w;i if w~i exists such that Sim(wp,w'pi ) > 0" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>(6)</td></tr><tr><td>[__</td></tr></table>", |
|
"text": "The result of disambiguation experiment(two senses)" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td>Sentence</td><td colspan=\"2\">w__w__w__w__w__w__~ v</td><td>v N HI</td><td>Correct(%)</td><td>Mu < 3</td><td>Correct(%)</td></tr><tr><td/><td>240</td><td>120(50.0)</td><td>447</td><td>432</td><td>180(75.0)</td><td>124</td><td>99(79.9)</td></tr><tr><td/><td/><td>21(9.0)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>199(41.0)</td><td/><td/><td/><td/><td/></tr><tr><td>{complete, end, develop, fill}</td><td>365</td><td>107(29.3)</td><td>727</td><td>450</td><td>280(76.7)</td><td>240</td><td>193(80.4)</td></tr><tr><td/><td/><td>242(66.3)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>16(4.4)</td><td/><td/><td/><td/><td/></tr><tr><td>{gain, win, get, increase}</td><td>334</td><td>47(14.0)</td><td>527</td><td>467</td><td>270(80.8)</td><td>187</td><td>152(81.4)</td></tr><tr><td/><td/><td>228(68.2) 59(17.8)</td><td/><td/><td/><td/><td/></tr><tr><td>{grow, increase, develop become}</td><td>310</td><td>68(21.9)</td><td>903</td><td>651</td><td>241(77.7)</td><td>372</td><td>305(82.0)</td></tr><tr><td>{operate, run, act, control}</td><td>232</td><td>132(42.5) 11o(35.6) 76(32.7)</td><td>812</td><td>651</td><td>187(80.6)</td><td>311</td><td>255(82.3)</td></tr><tr><td/><td/><td>83(35.7)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>73(31.6)</td><td/><td/><td/><td/><td/></tr><tr><td>{rise, increase, appear, grow}</td><td>276</td><td>51(18.4)</td><td>711</td><td>414</td><td>198(71.7)</td><td>372</td><td>294(79.1)</td></tr><tr><td/><td/><td>137(49.6) 88(32.0)</td><td/><td/><td/><td/><td/></tr><tr><td>{see, look, know, feel}</td><td>318</td><td>128(40.2)</td><td>1,785</td><td>934</td><td>263(82.7)</td><td>497</td><td>414(83.4)</td></tr><tr><td/><td/><td>162(50.9)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>28(8.9~</td><td/><td/><td/><td/><td/></tr><tr><td>{want, desire, search, lack}</td><td>267</td><td>66(24.7)</td><td>590</td><td>470</td><td>208(77.9)</td><td>198</td><td>159(80.8)</td></tr><tr><td/><td/><td>53t19.8) 148(55.5)</td><td/><td/><td/><td/><td/></tr><tr><td>{lead, cause, guide, precede}</td><td>183</td><td>139(75.9)</td><td>548</td><td>456</td><td>138(75.4)</td><td>274</td><td>221(80.9)</td></tr><tr><td/><td/><td>38(20.7) 6(3.4)</td><td/><td/><td/><td/><td/></tr><tr><td>{carry, bring, capture, behave}</td><td>186</td><td>142(76.3)</td><td>474</td><td>440</td><td>142(76.3)</td><td>207</td><td>167(80.7)</td></tr><tr><td/><td/><td>39(20.9) 5(2.8)</td><td/><td/><td/><td/><td/></tr><tr><td>Total (3 senses)</td><td>2,711</td><td>1,573(56.5)</td><td/><td/><td>2,107(77.7)</td><td/><td/></tr></table>", |
|
"text": "The result of disambiguation experiment(three senses) {catch, acquire, grab, watch}" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Num</td><td>{v, wp, wl, w~, wa}</td><td>Sentence</td><td>wp(%)</td><td>v</td><td>v N W</td><td>Correct(%)</td><td>Mu < 3</td><td>Correct(%)</td></tr><tr><td/><td/><td/><td>w~(%)</td><td/><td/><td/><td/><td/></tr><tr><td>(31)</td><td>{develop, create, grow, improve,</td><td>187</td><td>117(62.5)</td><td>922</td><td>597</td><td>155(82.8)</td><td>253</td><td>218(86.1)</td></tr><tr><td/><td>expand}</td><td/><td>34118.1 ) 412.1)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>32(17.3)</td><td/><td/><td/><td/><td/></tr><tr><td>(32)</td><td>{face, confront, cover, lie, turn}</td><td>222</td><td>54(24.3)</td><td>859</td><td>567</td><td>184(82.8)</td><td>178</td><td>154(86.5)</td></tr><tr><td/><td/><td/><td>103(46.3)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>12(s.4)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>53(24.0}</td><td/><td/><td/><td/><td/></tr><tr><td>(33)</td><td>{get, become, lose, understand, catch}</td><td>302</td><td>88(29.1) 98(~2.4)</td><td>762</td><td>513</td><td>229(75.8)</td><td>424</td><td>365(86.2)</td></tr><tr><td/><td/><td/><td>34(11.21 82(27.3)</td><td/><td/><td/><td/><td/></tr><tr><td>(34)</td><td>{go, come, become, run, fit}</td><td>217</td><td>101(46.5)</td><td>732</td><td>435</td><td>145(66.8)</td><td>374</td><td>302(80.9)</td></tr><tr><td/><td/><td/><td>66(30.4)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>36(16.5)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>14(6.6)</td><td/><td/><td/><td/><td/></tr><tr><td>(35)</td><td>{make, create, do, get, behave}</td><td>227</td><td>123(54.1)</td><td>783</td><td>555</td><td>178(78.4)</td><td>435</td><td>370(85.2)</td></tr><tr><td/><td/><td/><td>28(12.3)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>58(25.5) 18(8.1)</td><td/><td/><td/><td/><td/></tr><tr><td>(36)</td><td>{show, appear, inform, prove,</td><td>227</td><td>121(53.3)</td><td>996</td><td>560</td><td>181(79.7)</td><td>258</td><td>214(83.2)</td></tr><tr><td/><td>expi'ess}</td><td/><td>16(7.0)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>40(17.6)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>50(22.1)</td><td/><td/><td/><td/><td/></tr><tr><td>(37)</td><td>{take, buy, obtain, spend, bring}</td><td>246</td><td>20(8.1) 123(5o.o) 42(17.o}</td><td>2,742</td><td>1,244</td><td>i79(72.7)</td><td>829</td><td>677(81.6)</td></tr><tr><td/><td/><td/><td>6i(24.9)</td><td/><td/><td/><td/><td/></tr><tr><td>(as)</td><td>{hold, keep, carry, reserve,</td><td>145</td><td>7(4.81</td><td>727</td><td>459</td><td>111(76.5)</td><td>394</td><td>300(76.2)</td></tr><tr><td/><td>accept }</td><td/><td>53(36.5)</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>2(1.5) 83(57.2)</td><td/><td/><td/><td/><td/></tr><tr><td>(39)</td><td>{raise, lift, increase, create,</td><td>204</td><td>2(1.1)</td><td>746</td><td>491</td><td>151(74.0)</td><td>341</td><td>272(79.7)</td></tr><tr><td/><td>Collect}</td><td/><td>81(39.7}</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td>8614~.1 } 35(17.1)</td><td/><td/><td/><td/><td/></tr><tr><td>(40)</td><td>{draw, attract, pull, close, write}</td><td>162</td><td>78(48.1) ~8(17.4) 43(26.5) 13(8.o)</td><td>798</td><td>533</td><td>123(75.9)</td><td>143</td><td>119(83.2)</td></tr><tr><td/><td>Total (4 senses)</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>I</td><td>Tot al</td><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "The result of disambiguation experiment(four senses)" |
|
} |
|
} |
|
} |
|
} |