{
"paper_id": "S01-1022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:26.365576Z"
},
"title": "The Sprakdata-ML System as Used for SENSEV AL-2",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Kokkinakis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Goteborg University",
"location": {
"postBox": "Box 200",
"postCode": "SE-405 30",
"settlement": "Goteborg",
"country": "\u2022Sweden"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the Sprakdata-ML system as used in the SENSEV AL-2 exercise. The main focus of the paper is devoted to the process of feature extraction, preparation and organization of the test and training data.",
"pdf_parse": {
"paper_id": "S01-1022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the Sprakdata-ML system as used in the SENSEV AL-2 exercise. The main focus of the paper is devoted to the process of feature extraction, preparation and organization of the test and training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The methodology followed for sense disambiguation of the Swedish data by the Sprakdata-ML system is supervised, based on Machine Learning (ML) techniques, particularly Memory Based Learning (MBL). The MBL implementation we used originates from the university of Tilburg in a system called TiMBL; details can be found in Daelemans et al. ( 1999) . Thus, our main contribution in this task has been the effort to try and isolate a set of features that could maximize the performance of the MBL software. However, it is rather difficult to give the exact number of features and examples required for an adequate description of a word's sense or which algorithm performs best. We think that there is space for improvement of our system's performance by better modeling of the available resources (e.g. context, annotations), choice of parameters and algorithms, a claim that we have not explored to its full potential, further exploration is required. Intelligent example selection for supervised learning is an important issue in ML, an issue that we have not fully explored. In previous experiments for a similar problem for Swedish, the algorithm that performed best in TiMBL was a variant of the knearest neighbor (Mitchell, 1997) called IB 1, an algorithm that we also used in the exercise; (Kokkinakis & Johansson Kokkinakis, 1999) .",
"cite_spans": [
{
"start": 320,
"end": 344,
"text": "Daelemans et al. ( 1999)",
"ref_id": "BIBREF0"
},
{
"start": 1214,
"end": 1230,
"text": "(Mitchell, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 1292,
"end": 1333,
"text": "(Kokkinakis & Johansson Kokkinakis, 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": null
},
{
"text": "Data Preparation (Train)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": "91"
},
{
"text": "To enhance the lexical disambiguation results using the available resources, we perform preprocessing in both the dictionary and the text to be sense-disambiguated. This is motivated by the fact that by making certain normalizations and simplifications in the resources we (hopefully) contribute to the production of qualitatively better results. Initially, a text to be disambiguated is preprocessed by a tokeniser, a sentence boundary identifier, an idiom 1 and multiword identifier, a Name-Entity recogniser 2 , a part-of-speech tagger, a lemmatiser and a semantic tagger3. Then, the input texts are transformed to the specified format that the MBL requires, which is feature-vectors of a specific length and content. The vectors we use consist of 102 features, the last two being the id-number and class or sense assigned to the vector. Since we do not know in advance which features will be useful for each particular word and sense, we chose to include features from a number of different information sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1",
"sec_num": "91"
},
{
"text": "The vectors consisted of: (i) selected information gathered from the dictionary entries (5 features); (ii) near-context (5 features); (iii) annotations applied on the training corpus ( 5 1The idioms originate from the Gothenburg Lexical Data Base/Semantic Database (GLDB/SDB) (http://spraakdata.gu.se/lb/gldb.html) and were used for the recognition and marking of idioms in the test/training corpus (over 4,000 idioms). features); and (iv) information acquired from the lemmatised training corpus (85 features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "The corpus instances and dictionary were in XML format. An example of a corpus instance (1) for the first sense of the noun barn 'child' and a fragment of its dictionary description (2) are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "( 1) <instance id=\"barn.114\"><answer instance=\"barn.114\" senseid=\"barn_1_1\" I> <context> ... forsoken sa att spiidbarnen sjalva kunde styra de retningar som de utsattes for under forsoket. !nom sprakforskningen betyder det att <head>barnen<lhead> kan paverka hur olika talljud presenteras. Nar de far ... </context> </instance> 2<lemma-entry id=\"barn_1\" form= \"barn\" pos=\"n\" inflection=\"-et =\"><lexeme id= \"barn_1_1\"><definition> manniska som ej vuxit fardigk/definition> <definition-ext>till kropp och sjal; under ngn aldersgrans som beror pa samman-hangek/definition-ext> <synt-example>kvinnor och -slapptes fria <lsynt-example><synt-example>-under 6 ar kommer in gratis</synt-example><Compound>spadbarn</compoun d> ... <cyc/e id=\" barn_1_1_a\"><trans>spec. om manniska som ej natt pubertetsalder, straff -myndighetsalder etc. </trans><synt-example> annu nagot ar ar hon ett -</synt-example><compound> barnarbete </compound><compound>barnavardsnamn d</compound><l cycle> ... <llexeme><lexe me> ... <cycle id=\" barn_1_2_a\"> <trans> av. utvidgat, spec. om foster </trans><synt-example>hon ar med <lsynt-example><valency>med </valency> <I cycle> ... <llexeme><l lemma-entry>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "2.1 Vector Creation (Dictionary)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "The modeling of the vectors was performed in stages. The first stage of the processing uses the information from the dictionary. For every sense and sub-sense we extracted five representative nouns from the definition (and the definition extension) by applying part-of-speech tagging, lemmatization and exclusion of a number of generic nouns from a stop-list e.g. manniska 'human' (a). If the number of nouns were less than five, we completed the list with compounds (if available).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
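{
"text": "A minimal editorial sketch (not the authors' code) of the noun-selection step just described, assuming a part-of-speech tagged and lemmatised definition; the function name and the one-item stop-list are placeholders.\n\nSTOP_NOUNS = {'manniska'}  # generic nouns excluded via the stop-list, as in example (a)\n\ndef representative_nouns(tagged_definition, compounds, limit=5):\n    # tagged_definition: (lemma, pos) pairs from the definition and the definition extension\n    nouns = [lemma for lemma, pos in tagged_definition\n             if pos == 'n' and lemma not in STOP_NOUNS][:limit]\n    if len(nouns) < limit:  # pad with compounds from the entry, if any are available\n        nouns += compounds[:limit - len(nouns)]\n    return nouns\n\n# For barn_1_1 this yields ['kropp', 'sjal', 'aldersgrans'] from the definition in (2), padded with compounds such as 'spadbarn' when fewer than five nouns are found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},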
{
"text": "Furthermore, the syntactic examples were used as training corpus and were added to the training instances (b). The valency information (if any) was also used in the same way (c). Consequently the amount of training material increased with 1,296 \"new\" disambiguated instances. A \"dummy\" XXX instance-number was given in these cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "We did not put much effort on a more complex processing of the definitions since these are very short. The representations given below use the dictionary and corpus sample provided in (1) and (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation",
"sec_num": "2"
},
{
"text": "<definition>manniska som ej vuxit fardigt<ldefinition><definition-ext>till kropp och sjal; under ngn aldersgrans som beror pa sammanhanget <!definition-ext> become: barn_1_1: kropp, sjal, aldersgrans (b) <synt-example>kvinnor och -slapptes fria<lsynt-example> become:",
"cite_spans": [
{
"start": 200,
"end": 203,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "<instance id=\"barn.XXX\"> <answer instance=\"barn.XXX\" senseid= \"barn_1_1 \"/> <context> kvinnor och <head>barn<lhead> slapptes fria <lcontext><linstance> (c) <valency>med -<!valency> become:",
"cite_spans": [
{
"start": 152,
"end": 155,
"text": "(c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "<instance id=\"barn.XXX\"> <answer instance=\"barn.XXX\" senseid= \"barn_1_2_a\"/><context> med <head>barn </head> <lcontext><linstance>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "2.2 Vector Creation (Near Context)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "The second stage involved the use of the nearcontext. Punctuation, auxiliary verbs and a number of other stop-words were removed and the surrounding tokens (\u00b12) of each headword in the corpus were extracted (d). Only the lemma form of the headwords was used, and the context was not lemmatized:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "(d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
{
"text": "<instance id=\"barn.114\"><answer instance=\"barn.114\" senseid= \"barn_1_1\" !><context>... sprakforskningen betyder \u20aclet att <head>barneR-</head> kaf! paverka l1tlf olika ... </context> <!instance> became: <instance id=\"barn.114\"> <answer instance=\"barn.114\" senseid=\" barn_1_1 \"l><context>sprakforskningen betyder <head>barn<lhead> paverka olika <context><linstance>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},
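{
"text": "A sketch of the near-context step, under the assumption that stop-words are removed first and that only the headword is replaced by its lemma; the stop-word set and all names below are illustrative, not the ones actually used.\n\nSTOP_WORDS = {'det', 'att', 'kan', 'hur', '.', ','}  # illustrative only; the real list also covers punctuation and auxiliaries\n\ndef near_context(tokens, head_index, head_lemma, window=2):\n    # tokens: the sentence as a list of strings; head_index: position of the headword\n    kept = [(i, t) for i, t in enumerate(tokens)\n            if i == head_index or t.lower() not in STOP_WORDS]\n    pos = next(k for k, (i, _) in enumerate(kept) if i == head_index)\n    left = [t for _, t in kept[max(0, pos - window):pos]]\n    right = [t for _, t in kept[pos + 1:pos + 1 + window]]\n    return left + ['<head>' + head_lemma + '</head>'] + right\n\n# Applied to the sentence in (d) with head_lemma='barn', this gives\n# ['sprakforskningen', 'betyder', '<head>barn</head>', 'paverka', 'olika'].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(a)",
"sec_num": null
},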
{
"text": "During the third stage, the training corpus was processed by a n(\\me-entity recognizer (e.g. HUMAN, TIME), an idiom identifier (IDIOM) and a semantic tagger (e.g. BIO, ETHNOS, PHENOMENON). The annotations produced by these tools were gathered in the form of a list of labels, and the five most frequent in the respective set of instances for each sense and sub-sense were used in the vectors. For example, for the sense barn_1_1 the five most frequent annotations found in all training instances were: BIO, ORGANIZATION-AGENCY, LOCATION, SITU and OCCUPATION-AGENT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Features)",
"sec_num": "2.3"
},
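{
"text": "A sketch of this step under the same assumptions (all names hypothetical): collect the labels produced by the annotation tools over all training instances of a sense and keep the five most frequent.\n\nfrom collections import Counter\n\ndef top_annotations(label_lists, limit=5):\n    # label_lists: one list of NE/idiom/semantic labels per training instance of the sense\n    counts = Counter(label for labels in label_lists for label in labels)\n    return [label for label, _ in counts.most_common(limit)]\n\n# For barn_1_1 this would give ['BIO', 'ORGANIZATION-AGENCY', 'LOCATION', 'SITU', 'OCCUPATION-AGENT'].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Features)",
"sec_num": "2.3"
},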
{
"text": "Often, near-context cannot distinguish between different senses. In such cases it is useful to look at a larger context and extract keywords representative for each sense. We made a frequency list of all noun and verb occurrences for all corpus instances for each sense. From the produced lists, 85 keywords per sense were extracted by eliminating high frequency (a word occurred in more than X percent of the cases with the sense) and low frequency words (a word occurred at least Z times in the list). For the sense barn_1_1 the 85 keywords included:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
{
"text": "ansikte, ansvar, apparatur, arm, awikelse, barnmorska, barnomsorg, beredskap, betala, bild, detalj, dialog, djur, docka, erfarenhet, tel, f6restallning, f6rslag, ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
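{
"text": "An editorial sketch of the keyword selection: the exact X and Z thresholds are not given in the paper, so the cut-offs below are placeholders, as are the names.\n\nfrom collections import Counter\n\ndef sense_keywords(sense_instances, max_ratio=0.5, min_count=2, limit=85):\n    # sense_instances: one list of lemmatised nouns and verbs per corpus instance of the sense\n    counts = Counter(tok for inst in sense_instances for tok in set(inst))\n    n = len(sense_instances)\n    kept = [w for w, c in counts.most_common()\n            if c >= min_count          # drop low-frequency words (fewer than Z occurrences)\n            and c <= max_ratio * n]    # drop words occurring in more than X percent of the instances\n    return kept[:limit]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},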
{
"text": "After the collection and combination of the 95 features common to a sense (stages i, iii, iv in Section 2, e1), a complete case for a sense was produced (e2): ansikte, ansvar, apparatur, arm, avvikelse, barnmorska, barnomsorg, ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
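{
"text": "As a sketch of the combination step (again assumed, not taken from the paper), the 95 features common to a sense are simply the concatenation of the outputs of stages (i), (iii) and (iv):\n\ndef build_sense_features(dict_nouns, annotation_labels, keywords):\n    # dict_nouns: the 5 nouns extracted from the GLDB/SDB definition, stage (i)\n    # annotation_labels: the 5 most frequent NE/idiom/semantic labels, stage (iii)\n    # keywords: the 85 global-context keywords, stage (iv)\n    features = list(dict_nouns) + list(annotation_labels) + list(keywords)\n    assert len(features) == 95  # 102 minus the 5 near-context tokens, the id and the class\n    return features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},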
{
"text": "We assume then, that .for each training instance the above list is \"true\" and we convert the training instances into vectors of 102 features, where the 95 positions of the features in each 93 vector were substituted with '1' keeping intact the near context. Thus, the truncated training instance in (f) was re-formatted to (g):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
{
"text": "(f)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
{
"text": "<instance id=\"barn.114\"><answer instance=\"barn.114\" senseid= \"barn_1_1\" l><context>sprakforskningen betyder <head>barn<lhead> paverka olika <context><linstance> (g) sprlliorskningen, betyder, <head> bam <!head>, paverka, olika, 1, 1, 1, 1, 1, 1, 1, . .. , bam.l14, bam_l_l.",
"cite_spans": [
{
"start": 161,
"end": 164,
"text": "(g)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 212,
"end": 251,
"text": "paverka, olika, 1, 1, 1, 1, 1, 1, 1, .",
"ref_id": null
}
],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},
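{
"text": "A sketch of the (f) to (g) re-formatting under the assumptions above: every one of the 95 sense features is marked '1' for a training instance of that sense, while the five near-context tokens, the instance id and the sense id are kept as strings. The function name is hypothetical.\n\ndef training_vector(near_ctx, instance_id, sense_id, n_sense_features=95):\n    assert len(near_ctx) == 5  # two kept tokens on each side of the lemmatised headword\n    return list(near_ctx) + ['1'] * n_sense_features + [instance_id, sense_id]\n\n# training_vector(['sprakforskningen', 'betyder', '<head>barn</head>', 'paverka', 'olika'],\n#                 'barn.114', 'barn_1_1') gives the 102-element row shown in (g).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Creation (Global Context)",
"sec_num": "2.4"
},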
{
"text": "The test material consisted of 1,525 corpus instances in the same format as the previous training example, but without any designation of the correct senseid. The material was processed in a similar manner as the training one. The major difference lies in the fact that at the vector-creation stage we used the feature-vectors representative for a sense, example (e) previously, and we compared them with the features produced for each test instance. A feature at a specific position then was assigned '1' if the feature in the test occurred in the representative feature vector or '0' otherwise. For instance, the test instance in (h) was transformed, after processing, to a 102-featurevector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},
{
"text": "(h) <instance id=\"barn.114\"><answer instance=\"barn.114\" senseid= \"??????\" l><context>l jungfrukammaren innanf6r k6ket bodde en kokerska och en husa. [ Ett hus fyllt av minnen ] Huset ar fyllt av minnen. I fotoalbumen kan vi se farmor omgiven av sina sma vitkladda <head>barn<lhead> och pappa i sj6manskostym lutad mot en bj6rk. I farfars svarta, snidade skrivbord </context> <!instance>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},
{
"text": "The class of the representative sense-vector that produced more '1 's for the test instance was chosen as the class of that instance. In (i) there are four '1 's which means that the specific test instance had four common features with the representative vector for sense barn_1_2_a, and less than four for all the other representative vectors for the rest of the senses for barn. Thus, the class for the test instance is assigned that sense (which may be altered by the MBL software during the nearest-neighbor calculation). Thus, the test instance in (h) was transformed to the format illustrated in (i). The four '1 's denote that there were four features in common with the representative vector for barn_1_2_a, the rest of the representative sensevectors for barn (e.g. barn_1_1_a, barn_1_1_b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},
{
"text": "etc.) had less common features than four, and so barn_1_2_a was chosen: vitkladda, <head>barn</head>, pappa, i,O, 0, 0,0,0, 1, 0, 0, 0,0,0, 1, 1, 1, 0,0,0, 0,0, 0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0,0, 0,0,0,0, 0,0,0,0,0, 0,0,0, 0,0, 0,0, 0,0,0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0,0, 0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0, 0, 0, 0, 0, 0, barn.114, barn_1_2_a The training and test feature vectors were then fed to the TiMBL software, where the IB 1 algorithm (nearest neighbor search) was used.",
"cite_spans": [
{
"start": 72,
"end": 367,
"text": "vitkladda, <head>barn</head>, pappa, i,O, 0, 0,0,0, 1, 0, 0, 0,0,0, 1, 1, 1, 0,0,0, 0,0, 0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0,0, 0,0,0,0, 0,0,0,0,0, 0,0,0, 0,0, 0,0, 0,0,0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0,0, 0,0,0,0, 0, 0,0,0,0, 0,0,0, 0, 0, 0,0, 0, 0, 0, 0, 0, 0, barn.114, barn_1_2_a",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},
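{
"text": "A sketch of this pre-labelling step (hypothetical names, not the authors' code): each of the 95 sense features becomes '1' or '0' depending on whether it occurs in the processed test instance, and the sense whose representative vector yields the most '1's provides the provisional class, which TiMBL's IB1 nearest-neighbour search may still revise.\n\ndef label_test_instance(test_tokens, near_ctx, sense_features, instance_id):\n    # sense_features: dict mapping each sense id to its 95 representative features\n    token_set = set(test_tokens)\n    best_sense, best_flags, best_hits = None, None, -1\n    for sense_id, feats in sense_features.items():\n        flags = ['1' if f in token_set else '0' for f in feats]\n        if flags.count('1') > best_hits:\n            best_sense, best_flags, best_hits = sense_id, flags, flags.count('1')\n    return list(near_ctx) + best_flags + [instance_id, best_sense]\n\n# For the instance in (h) the best match is barn_1_2_a with four '1's, giving the 102-element vector in (i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},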
{
"text": "4 Results Table 1 shows the evaluation of the test material. Since answers were provided for the whole material, precision and recall obtain the same value. Coarse-grain evaluation was not used, however coarse-grained is considered the least interesting of the three measures. The existence of sense ambiguity (polysemy and homonymy) is one of the major problems affecting the usefulness of basic corpus exploration tools. In this respect, we regard sense disambiguation as a very important process and component when it is seen in the context of a wider and deeper text-processing architecture. In this paper we have described a simple feature-vector extraction approach to sense disambiguation that was utilized in a MBL software. We do not believe that we have fully 94 exploited the capabilities of either the software or the way we can model the available resources. These issues will be investigated in the future, as well as the evaluation of the sense-tagger on an even larger scale.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Preparation (Test)",
"sec_num": "3"
},
{
"text": "See http://spraakdata.gu.se/svedk/ne.html for a demo.3 The semantic tagger originates from work byKokkinakis et al. (2000) and uses the SIMPLE semantic classes for annotation (only nouns).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "TiMBL: Tilburg Memory Based Learner, version 2.0, Reference Guide",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Van Der Sloat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daelemans W., Zavrel J., van der Sloat K. and van den Bosch A. (1999). TiMBL: Tilburg Memory Based Learner, version 2.0, Reference Guide. ILK Technical Report 99-01, Paper available from: http://ilk.kub.nl!-ilk!papers/ilk9901.ps.gz.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Sense Tagging at the Cycle-Level Using GLDB",
"authors": [],
"year": null,
"venue": "Nordiska Studier i Lexikografi",
"volume": "27",
"issue": "",
"pages": "146--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sense Tagging at the Cycle-Level Using GLDB. Nordiska Studier i Lexikografi, vol. 27:146-167.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nordiska Foreningen fOr Lexikografi & Meijerbergs Institut for Svensk Etymologisk Forskning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gellerstam",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Johannessen",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ralph",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Rogstrom",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gellerstam M., Johannessen K., Ralph B. and Rogstrom L. (eds). Nordiska Foreningen fOr Lexikografi & Meijerbergs Institut for Svensk Etymologisk Forskning.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annotating, Disambiguating & Automatically Extending the Coverage of the Swedish SIMPLE Lexicon",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kokkinakis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Toporowska Gronostaj",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Warmenius",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2nd Languages Resources and Evaluation Conference ( LREC)",
"volume": "III",
"issue": "",
"pages": "1397--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kokkinakis D., Toporowska Gronostaj M. and Warmenius K. (2000). Annotating, Disambiguating & Automatically Extending the Coverage of the Swedish SIMPLE Lexicon. Proceedings of the 2nd Languages Resources and Evaluation Conference ( LREC), vol. III: 1397-1404. Athens, Hellas.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Machine Learning. McGraw-Hill Series on Computer Science",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell T. M. (1997). Machine Learning. McGraw- Hill Series on Computer Science.",
"links": null
}
},
"ref_entries": {}
}
}