{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:45.549038Z"
},
"title": "Automatic Semantic Role Labeling in Ancient Greek Using Distributional Semantic Modeling",
"authors": [
{
"first": "Alek",
"middle": [],
"last": "Keersmaekers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KU Leuven/Research Foundation -Flanders Blijde Inkomststraat 21",
"location": {
"postCode": "3000",
"settlement": "Leuven"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a first attempt to automatic semantic role labeling in Ancient Greek, using a supervised machine learning approach. A Random Forest classifier is trained on a small semantically annotated corpus of Ancient Greek, annotated with a large amount of linguistic features, including form of the construction, morphology, part-of-speech, lemmas, animacy, syntax and distributional vectors of Greek words. These vectors turned out to be more important in the model than any other features, likely because they are well suited to handle a low amount of training examples. Overall labeling accuracy was 0.757, with large differences with respect to the specific role that was labeled and with respect to text genre. Some ways to further improve these results include expanding the amount of training examples, improving the quality of the distributional vectors and increasing the consistency of the syntactic annotation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a first attempt to automatic semantic role labeling in Ancient Greek, using a supervised machine learning approach. A Random Forest classifier is trained on a small semantically annotated corpus of Ancient Greek, annotated with a large amount of linguistic features, including form of the construction, morphology, part-of-speech, lemmas, animacy, syntax and distributional vectors of Greek words. These vectors turned out to be more important in the model than any other features, likely because they are well suited to handle a low amount of training examples. Overall labeling accuracy was 0.757, with large differences with respect to the specific role that was labeled and with respect to text genre. Some ways to further improve these results include expanding the amount of training examples, improving the quality of the distributional vectors and increasing the consistency of the syntactic annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last couple of years there has been a large wave of projects aiming to make the large and diachronically diverse corpus of Ancient Greek linguistically searchable. Some large treebanking projects include the Ancient Greek Dependency Treebanks (Bamman, Mambrini, and Crane, 2009) , the PROIEL Treebank (Haug and J\u00f8hndal, 2008) , the Gorman Trees (Gorman, 2019) and the Pedalion Treebanks (Keersmaekers et al., 2019) . Altogether (also including some smaller projects) the Greek treebank material already contains more than 1.3 million tokensand it is still growingoffering a solid basis for corpuslinguistic research. There have also been recent efforts to automatically annotate an even larger body of text using natural language processing techniques: see Celano (2017) and Vatri and McGillivray (2018) for the literary corpus and Keersmaekers (2019) for the papyrus corpus. However, despite this large amount of morphologically and syntactically annotated data, semantic annotation for Ancient Greek is far more limited. A label such as \"ADV\" (adverbial) in the Ancient Greek Dependency Treebanks, for instance, refers to a large category of adverbials that do not necessarily have much in common: e.g. expressions of time, manner, place, cause, goal, and so on. While there have been some smaller scale initiatives for semantic role annotation in Greek, these only amount to about 12500 tokens (see section 2). This can be explained by the fact that manual annotation is a time-intensive task. Therefore this paper will present a first attempt to automatic semantic role labeling in Ancient Greek, using a supervised machine learning approach. This paper is structured as follows: after introducing the data used for this project (section 2), section 3 will describe the methodology. Section 4 will give a detailed overview and analysis of the results, which are summarized in section 5.",
"cite_spans": [
{
"start": 250,
"end": 285,
"text": "(Bamman, Mambrini, and Crane, 2009)",
"ref_id": "BIBREF1"
},
{
"start": 308,
"end": 332,
"text": "(Haug and J\u00f8hndal, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 352,
"end": 366,
"text": "(Gorman, 2019)",
"ref_id": "BIBREF27"
},
{
"start": 394,
"end": 421,
"text": "(Keersmaekers et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 764,
"end": 777,
"text": "Celano (2017)",
"ref_id": "BIBREF26"
},
{
"start": 782,
"end": 810,
"text": "Vatri and McGillivray (2018)",
"ref_id": "BIBREF23"
},
{
"start": 839,
"end": 858,
"text": "Keersmaekers (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Devising a definite list of semantic roles for Ancient Greek is not a trivial task. Looking at semantic annotation projects of modern languages, we can also see a wild amount of variation in the number of roles that are annotated, ranging from the 24 roles of VerbNet (Kipper Schuler, 2005) to the more than 2500 roles of FrameNet (Baker, Fillmore, and Lowe, 1998) . Obviously learning 2500 semantic roles is not feasible in a machine learning context (and even the 39 roles in the Ancient Greek Dependency Treebanks are a little on the high side considering the amount of training data we have, see below). Therefore I decided to make use of the roles of the Pedalion project (Van Hal and Ann\u00e9, 2017). These are based on semantic roles that are commonly distinguished both in cross-linguistic typological frameworks and in the Greek linguistic tradition (in particular Crespo, Conti, and Maquieira 2003, although their list is more fine-grained). The 29 Pedalion roles I used for this project (see table 1) are a reasonable enough amount to be automatically learned through machine learning, and they are also specifically relevant for Ancient Greek, in the sense that no role of this list is expressed by the exact same set of formal means as any other role: e.g. while both an instrument and a cause can be expressed with the dative in Greek, a cause can also be expressed by the preposition \u1f15\u03bd\u03b5\u03ba\u03b1 (h\u00e9neka: \"because of\") with the genitive while an instrument cannot. For this task I limited myself to nouns and other nominalized constructions, prepositional groups and adverbs, depending on a verb. I excluded a number of constructions from the data (on a rule-based basis), either due to a lack of semantic annotation in the data I used (see below) or because they did not express any of the semantic roles listed in table 4 (e.g. appositions): nominatives, vocatives, accusatives when used as an object, infinitive and participial clauses (they are still included when nominalized with an article, see e.g. sentence 1 below), and words with a syntactic relation other than ADV (adverbial), OBJ (complement) or PNOM (predicate nominal). 1 ADV is used for optional modifiers (e.g. \"Yesterday I gave him a book\"), while OBJ is used for obligatory arguments of non-copula verbs (e.g. \"Yesterday I gave him a book\") and PNOM for obligatory arguments of copula verbs (e.g. \"I was in Rome\").",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Kipper Schuler, 2005)",
"ref_id": "BIBREF15"
},
{
"start": 331,
"end": 364,
"text": "(Baker, Fillmore, and Lowe, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 870,
"end": 904,
"text": "Crespo, Conti, and Maquieira 2003,",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The data",
"sec_num": "2."
},
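As a purely illustrative sketch of the rule-based selection just described (not the author's actual code), the following Python function shows the kind of filter that decides whether a token receives a semantic role. The token fields ("relation", "case", "pos", "has_article") are hypothetical placeholders for whatever the treebank format actually provides.

```python
# Hypothetical sketch of the rule-based filter described above; the field names
# are assumptions, not the treebank's actual format.
KEPT_RELATIONS = {"ADV", "OBJ", "PNOM"}

def gets_semantic_role(token: dict) -> bool:
    """Return True if this token should be assigned a semantic role label."""
    if token["relation"] not in KEPT_RELATIONS:
        return False
    # Nominatives and vocatives are excluded, as are accusatives used as objects.
    if token["case"] in {"nominative", "vocative"}:
        return False
    if token["case"] == "accusative" and token["relation"] == "OBJ":
        return False
    # Infinitive and participial clauses are excluded unless nominalized with an article.
    if token["pos"] in {"infinitive", "participle"} and not token.get("has_article", False):
        return False
    return True

# Example: a dative noun depending on a verb as an adverbial is kept.
print(gets_semantic_role({"relation": "ADV", "case": "dative", "pos": "noun"}))  # True
```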
{
"text": "I took semantically annotated data from the following sources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The data",
"sec_num": "2."
},
{
"text": "(1) The Ancient Greek Dependency Treebanks (AGDT) (Bamman, Mambrini, and Crane 2009) , which has semantic data from the Bibliotheca of Pseudo-Apollodorus, Aesop's Fables and the Homeric Hymn to Demeter (1119 semantically annotated tokens in total). 2 The annotation scheme is described in Celano and Crane (2015) : since it was more finegrained (39 unique roles) than the one this project uses, some of their categories needed to be reduced (e.g. \"relation\", \"connection\", \"respect\" and \"topic\" to \"respect\"). Additionally, there are two other projects that are not included in the AGDT but use the same annotation scheme: a treebank of Apthonius' Progymnasmata (Yordanova, 2018, 752 , so I manually checked their data and disambiguated some roles (in particular \"extent\", \"orientation\" and \"indirect object\"). Syntactically its annotation scheme does not make a distinction between obligatory (OBJ) and non-obligatory (ADV) modifiers, so they were also disambiguated manually. (3) The Pedalion Treebanks (Keersmaekers et al., 2019) , annotated by a group of people involved at the University of Leuven in the annotation scheme described in this paper (syntactically, they are annotated in the same way as the AGDT). This is the largest amount of data this project uses (9446 semantically annotated tokens, or 76% of the total) and contains a wide range of classical and post-classical authors. In total this data includes 12496 tokens of 29 roles, as described in table 4 at the end of this paper.",
"cite_spans": [
{
"start": 50,
"end": 84,
"text": "(Bamman, Mambrini, and Crane 2009)",
"ref_id": "BIBREF1"
},
{
"start": 249,
"end": 250,
"text": "2",
"ref_id": null
},
{
"start": 289,
"end": 312,
"text": "Celano and Crane (2015)",
"ref_id": "BIBREF4"
},
{
"start": 662,
"end": 683,
"text": "(Yordanova, 2018, 752",
"ref_id": null
},
{
"start": 1005,
"end": 1032,
"text": "(Keersmaekers et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The data",
"sec_num": "2."
},
{
"text": "Next, I used this dataset of 12496 annotated roles as training data for a supervised machine learning system. Traditionally, automated approaches typically make use of formal features such as part-of-speech tags and morphology, syntactic labels, lemmas and sometimes encyclopedic knowledge such as lists of named entities (see e.g. Gildea and Jurafsky, 2002; M\u00e0rquez et al., 2008; Palmer, Gildea, and Xue, 2010) , essentially excluding semantic information. This seems counter-intuitive, but was necessary at the time due to a lack of good methods to represent lexical semantics computationally. Recently, however, due to the rise of so-called distributional semantic models (or \"vector space models\") and word embeddings, it has become possible to computationally represent the meaning of a word as a vector, with words that are similar in meaning also having mathematically similar vectors. This methodology has been highly successful for several natural language processing tasks, including semantic role labeling (e.g. Zhou and Xu, 2015; He et al., 2017; Marcheggiani and Titov, 2017) . Therefore one of the crucial features used for this task was a distributional vector of both the verb and the argument that bears the semantic relationship to the verb. The method of computing these distributional vectors is explained in more detail in Keersmaekers and Speelman (to be submitted). In short, they are calculated by computing association values (with the PPMI \"positive pointwise mutual information\" measure) of a given target lemma with its context elements, based on a large (37 million tokens) automatically parsed corpus of Ancient Greek (see Turney and Pantel, 2010 for a more detailed explanation of this methodology). These context elements are lemmas with which the target lemma has a dependency relationship (either its head or its child). 3 Next, these vectors are smoothed and their dimensionality is reduced by a technique called latent semantic analysis (LSA). This technique (using so-called Singular Value Decomposition) enables us to retrieve vectors with a lower dimensionality, where the individual elements do not directly correspond to individual contexts but the 'latent meaning' 4 contained in several context elements (see Deerwester et al., 1990 for more detail). Experimentally I found that reducing the vector to only 50 latent dimensions was sufficient for this task, with no significant improvements by increasing the number of dimensions. 5 Apart from the distributional vector of both the verb and its argument, the following additional features were included: \uf0b7 The form of the construction, subdivided into three features: the preposition (or lack thereof), the case form of its dependent word and a feature that combines both; e.g. for \u1f00\u03c0\u03cc+genitive (ap\u00f3: \"from\") these features would be {\u1f00\u03c0\u03cc,genitive,\u1f00\u03c0\u03cc+genitive}. Combinations that did occur less than 10 times were set to \"OTHER\" (179 in total). \uf0b7 The lemma of both the verb and its argument. For verbs or arguments that occurred less than 50 times, the value of this feature was set to \"OTHER\". Only 26 argument lemmas and 25 verb lemmas occurred more than 50 times; however, altogether these lemmas account for 34% of all tokens for the arguments and 34% of all tokens for the verbs as well.",
"cite_spans": [
{
"start": 332,
"end": 358,
"text": "Gildea and Jurafsky, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 359,
"end": 380,
"text": "M\u00e0rquez et al., 2008;",
"ref_id": "BIBREF17"
},
{
"start": 381,
"end": 411,
"text": "Palmer, Gildea, and Xue, 2010)",
"ref_id": "BIBREF18"
},
{
"start": 1023,
"end": 1041,
"text": "Zhou and Xu, 2015;",
"ref_id": "BIBREF24"
},
{
"start": 1042,
"end": 1058,
"text": "He et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 1059,
"end": 1088,
"text": "Marcheggiani and Titov, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 2252,
"end": 2275,
"text": "Deerwester et al., 1990",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
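As a minimal sketch of the vector-building pipeline described above (co-occurrence counts over dependency relations, PPMI weighting, then truncated SVD down to 50 latent dimensions): the paper does this in R with RSpectra::svds, so the NumPy/SciPy version below is an illustration only, and the toy lemma pairs are invented.

```python
# Illustrative sketch only, assuming a list of (target lemma, context lemma)
# pairs extracted from dependency relations (the target's head or child).
import numpy as np
from scipy.sparse.linalg import svds

def cooccurrence_counts(pairs, lemmas, contexts):
    """Count how often each target lemma occurs with each context lemma."""
    counts = np.zeros((len(lemmas), len(contexts)))
    row = {l: i for i, l in enumerate(lemmas)}
    col = {c: j for j, c in enumerate(contexts)}
    for target, context in pairs:
        counts[row[target], col[context]] += 1
    return counts

def ppmi(counts):
    """Positive pointwise mutual information weighting of a count matrix."""
    total = counts.sum()
    rows = counts.sum(axis=1, keepdims=True)
    cols = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts * total) / (rows * cols))
    pmi[~np.isfinite(pmi)] = 0.0
    return np.maximum(pmi, 0.0)           # keep only positive associations

def lsa_vectors(counts, k=50):
    """Reduce the PPMI-weighted matrix to k latent dimensions (50 in the paper)."""
    u, s, _vt = svds(ppmi(counts), k=k)    # truncated singular value decomposition
    return u * s                            # one k-dimensional vector per target lemma

# Toy usage with invented, transliterated lemma pairs (in reality: a 37-million-token corpus).
pairs = [("grapho", "epistole"), ("grapho", "biblion"), ("pempo", "epistole"),
         ("pempo", "doron"), ("didomi", "doron"), ("didomi", "biblion")]
lemmas = ["grapho", "pempo", "didomi"]
contexts = ["epistole", "biblion", "doron"]
counts = cooccurrence_counts(pairs, lemmas, contexts)
print(lsa_vectors(counts, k=2).shape)       # (3, 2); k=2 only because the toy matrix is tiny
```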
{
"text": "\u1f10\u03be\u03ad\u03c1\u03c7\u03bf\u03bc\u03b1\u03b9 (ex\u00e9rkhomai) and \u1f00\u03c0\u03ad\u03c1\u03c7\u03bf\u03bc\u03b1\u03b9 (ap\u00e9rkhomai) \"go away\" would typically be used with similar nouns. These \"latent meanings\" can therefore be seen as generalizations over several correlated features. 5 I used the function svds from the R package RSpectra (Qiu et al., 2019) .",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "(Qiu et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "\uf0b7 The syntactic relation between verb and argument, which was either \"OBJ\" (complement), \"ADV\" (adverbial) or \"PNOM\" (predicate nominal). \uf0b7 Animacy data, taken from an animacy lexicon coming from several sources: the PROIEL project (Haug and J\u00f8hndal, 2008) as well as data annotated at the University of Leuven (see Keersmaekers and Speelman, to be submitted). It categorizes nouns into the following groups: animal, concrete object, nonconcrete object, group, person, place and time. For 5249 (42%) arguments a label from this category could be assigned; the others were set to \"unknown\". \uf0b7 The part-of-speech of the argument to the verb:",
"cite_spans": [
{
"start": 232,
"end": 256,
"text": "(Haug and J\u00f8hndal, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 316,
"end": 332,
"text": "Keersmaekers and",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "adjective, article, demonstrative pronoun, indefinite pronoun, infinitive, interrogative pronoun, noun, numeral, participle, personal pronoun and relative pronoun. \uf0b7 Morphological features of the argument and of the verb: gender and number for the argument and number, tense, mood and voice for the verb. I trained a Random Forest classifier on this data, using R (R Core Team 2019) package randomForest (Breiman et al., 2018) , building 500 classification trees 6this classifier turned out to perform better than any other machine learning model I tested. The results were evaluated using 10-fold cross-validation (i.e. by dividing the data in 10 roughly equally sized parts as test data, and training 10 models on each of the other 9/10 of the data).",
"cite_spans": [
{
"start": 404,
"end": 426,
"text": "(Breiman et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
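The paper trains the Random Forest with R's randomForest package; purely as an illustration of the same setup (one-hot encoded categorical features plus the vector dimensions, 500 trees, 10-fold cross-validation), here is a hedged scikit-learn sketch. The feature names and the randomly generated toy data are invented for the example.

```python
# Hedged sketch of the training and evaluation setup described above; not the
# author's R code, and the toy instances below are invented stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def toy_instance():
    # Stand-in for one (argument, verb) pair: a few categorical features plus
    # the first of the 50 latent vector dimensions (the rest are omitted here).
    return {"prep_case": rng.choice(["NONE+dative", "apo+genitive", "en+dative"]),
            "relation": rng.choice(["ADV", "OBJ", "PNOM"]),
            "animacy": rng.choice(["time", "place", "unknown"]),
            "arg_dim_0": rng.normal(), "verb_dim_0": rng.normal()}

instances = [toy_instance() for _ in range(200)]
labels = rng.choice(["time", "source", "location"], size=200)

X = DictVectorizer(sparse=False).fit_transform(instances)       # one-hot + numeric
clf = RandomForestClassifier(n_estimators=500, random_state=1)  # 500 trees, as in the paper
folds = KFold(n_splits=10, shuffle=True, random_state=1)
# Every instance is predicted by a model trained on the other nine folds;
# pooling the predictions of the 10 test folds gives one overall accuracy figure.
predictions = cross_val_predict(clf, X, labels, cv=folds)
print(round(accuracy_score(labels, predictions), 3))
```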
{
"text": "Overall labeling accuracy was 0.757, or 9460/12496 roles correctly labeled. 7 However, there were large differences among specific roles, as visualized in table 1. These results are calculated by summing up the errors for each of the 10 test folds. This is the default setting for the randomForest package, but this amount can be decreased to as low as 250 without having a large negative effect on labeling accuracy (0.756, or -0.1%). 7 While this set of roles is quite fine-grained, a reduction in the number of roles did not have a large effect on accuracy: when I merged some less frequent roles with more frequent ones ('condition' to 'respect', 'extent of space' to 'location', 'frequency' and 'time frame' to 'time', 'intermediary' and 'value' to 'instrument', 'material' to 'source', 'modality' to 'manner', 'property' to 'possessor' and 'result' to 'goal', reducing the amount of roles to 19 from 29), accuracy only increased with 1.1% point (0.768). This is probably because these roles, while semantically quite similar, typically use other formal means in Greek to express them (e.g. 'time frame' is typically expressed by the genitive, but 'time' by the dative). In general low recall scores for a specific role can be explained by a lack of training examples: roles that had very little training data such as condition (only 5 instances), property (6 instances) and result (15 instances) expectedly had very low recall scores (0 for condition and property, and 0.133 for result). Figure 1 plots the recall score of each role as a function of the (logarithmically scaled) token frequency of the role in the training data, showing that the amount of training examples is one of the main factors explaining the performance of each role. Figure 2 shows a confusion matrix detailing how often each role (\"Reference\") got labeled as another role (\"Prediction\"). Next, we can estimate the effect of each variable by testing how well the classifier performs when leaving certain variables out of the model. 8 As can be inferred from table 2, there were only two features that had a substantial effect on the overall model accuracy: the word vectors (-8% accuracy when left out) and the syntactic label (-2.4% accuracy when left out). Lemmas, morphology, animacy and part-of-speech were less essential, as the accuracy decreases less than half a percentage point when either of them (or all of them) is left out. Probably the information that is contained in the lemma, animacy and part-of-speech features is already largely contained in the word vectors, 8 I did not test leaving out the three variables indicating the form of the construction since I considered them essential for the classification task. The variable importances calculated by the random forest also indicate that these variables are by far the most important (in the order \"combined preposition/case\" > \"preposition\" > \"case\"). While including a feature \"combined preposition/case\" might seem superfluous, considering that the regression trees are able to model the interaction between them natively, when it is excluded there is a relatively big drop in accuracy, from 0.757 to 0.726 (-3.1%). 
Presumably due to the low amount of training data and the large feature space, the data often gets partitioned into too small groups during the construction of the tree so that this interaction effect is not modelled (see also As for part-of-speech differences, interrogative pronouns (accuracy 0.893; however, 3/4 of examples are the form \u03c4\u03af t\u00ed \"why\"), adverbs (0.822) and personal pronouns (0.807) did particularly well, while relative pronouns (0.528), articles (0.616), numerals (0.629, but only 35 examples) and infinitives (0.667) did rather badly. The results of relative pronouns are not particularly surprising, since they are inherently anaphoric: therefore it would likely be better to model them by the vector of their antecedent (which is directly retrievable from the syntactic tree) rather than the \"meaningless\" vector of the lemma \u1f45\u03c2 (h\u00f3s: \"who, which\"). As for infinitives, the issue might be that they are modelled with the same vectors as nouns, while their usage is quite different: in sentence (1), for instance, whether the lemma of the infinitive is \u03b8\u03bf\u03bb\u03cc\u03c9 (thol\u00f3\u014d: \"disturb\") or any other lemma is irrelevant, and the causative meaning is instead inferred from the verb \u1f10\u03bc\u03ad\u03bc\u03c6\u03b5\u03c4\u03bf (em\u00e9mpheto: he reproached) combined with the \u1f10\u03c0\u03af + dative (\u00e9pi: \"because of\") infinitive construction (in the future it might therefore be better to model infinitive arguments with a singular vector generalizing over all occurrences of an infinitive). Similarly, articles are modelled with the vector of the lemma \u1f41 (ho: \"the\"), which covers all usages of this lemma, while the (dominant) attributive usage is quite different from its pronominal usage (as a verbal argument): therefore restricting the vector of \u1f41 to pronominal uses might also help performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 1494,
"end": 1502,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1748,
"end": 1756,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and analysis",
"sec_num": "4."
},
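The leave-one-feature-group-out comparison described in this paragraph (e.g. dropping the word vectors costs about 8% accuracy, dropping the syntactic label about 2.4%) can be expressed compactly. The sketch below is illustrative only: the column grouping and the `evaluate` callable (for instance, a wrapper around the 10-fold cross-validation shown earlier) are assumptions rather than anything provided by the paper.

```python
# Hypothetical sketch of the feature-ablation test described above.
def ablation(X, y, feature_groups, evaluate):
    """X: NumPy feature matrix; feature_groups maps a group name ('vectors',
    'lemmas', 'animacy', ...) to the column indices of X belonging to that
    group; evaluate(X, y) returns a cross-validated accuracy."""
    baseline = evaluate(X, y)
    deltas = {}
    for name, columns in feature_groups.items():
        keep = [i for i in range(X.shape[1]) if i not in set(columns)]
        # Retrain and re-evaluate without this feature group.
        deltas[name] = evaluate(X[:, keep], y) - baseline
    return deltas  # negative values: accuracy lost when the group is removed
```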
{
"text": "( On the other side of the spectrum are poetic texts, which often express their semantic roles with other formal means than prose texts (which are the majority of the training data), and philosophical and rhetorical texts, which use relatively abstract language (see also below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "Moving towards a more detailed analysis of the results, the following will give a short overview of the specific problems associated with some roles that turned out to be especially problematic. As for condition, property, result and modality, which all had recall scores of less than 0.3, there are simply not enough training tokens in the data to make any conclusions about the performance of these roles (5, 6, 15 and 17 respectively). Intermediary and material did perform relatively well, on the other hand (recall of 0.688 and 0.727), even though they do not have that many training examples either (16 and 22 respectively). However, they are rather uniformly represented in the training data: each example of \"intermediary\" that was classified correctly was encoded by \u03b4\u03b9\u03ac + genitive (di\u00e1: \"through\") and had either the verb \u03b3\u03c1\u03ac\u03c6\u03c9 (gr\u00e1ph\u014d: \"write\"), \u03ba\u03bf\u03bc\u03af\u03b6\u03c9 (kom\u00edz\u014d: \"bring\") or \u03c0\u03ad\u03bc\u03c0\u03c9 (p\u00e9mp\u014d: \"send\") with it, while every single example of \"material\" that was classified correctly was a genitive object of either \u03c0\u03af\u03bc\u03c0\u03bb\u03b7\u03bc\u03b9 (p\u00edmpl\u0113mi) or \u1f10\u03bc\u03c0\u03af\u03bc\u03c0\u03bb\u03b7\u03bc\u03b9 (emp\u00edmpl\u0113mi) \"fill\". Because of this large level of uniformity, their relatively high performance with respect to their token frequency is not particularly surprising. Extent of space, on the other hand, did quite bad even when its frequency of 67 training examples is taken into account, as can be seen on figure 1. From the confusion matrix in figure 2, we can see that it was, unsurprisingly, most commonly misclassified as \"location\" (almost half of all cases) and, to a much lower extent, \"direction\" and \"cause\". One of the difficulties is that most expressions that can be used to express this role can also express a location: e.g. \u03b4\u03b9\u03ac with the genitive (di\u00e1: \"through\"), \u1f10\u03c0\u03af with the accusative (ep\u00ed \"at, to\"), \u03ba\u03b1\u03c4\u03ac with the accusative (kata: \"along\") and so on (sometimes this role was also misclassified as \"location\" in the data, which obviously did not help the learning or evaluation process). As an additional difficulty, the lemmas used with this role do not for voice of the verb, this can probably be explained because I did not label subjects, making the number of roles where this would be a factor relatively limited (mainly \"agent\" and possibly \"experiencer\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "substantially differ from the lemmas typically used for the role \"location\" (e.g. lemmas such as \u1f00\u03b3\u03bf\u03c1\u03ac agor\u00e1 \"market\", \u03b3\u1fc6 g\u1ebd \"land\" etc.). Instead it is typically an interaction of the meaning of the verb and the form of the construction that determines that the semantic role should not be \"location\" but \"extent of space\", which is likely too difficult to learn with the limited amount of training examples for this role. Similar problems arise for the roles time frame and frequency, which are often expressed with the same argument lemmas as \"time\" and therefore are often confused with this role: however, the degree of confusion is less than with \"extent of space\", likely because the formal means to express these roles are quite different from the ones used to express \"time\" (e.g. time frame is mostly expressed with the genitive, while time is rarely so; frequency uses several adverbs such as \u03c0\u03bf\u03bb\u03bb\u03ac\u03ba\u03b9\u03c2 poll\u00e1kis \"frequently\", \u03b4\u03af\u03c2 d\u00eds \"twice\" etc. that can only express this role). More training examples would probably be beneficial in these cases: while source and direction, for instance, are also often used with the same arguments as \"location\", their recall scores are quite high, likely because they have many training examples to learn from (803 and 1006 respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "Moving to the more frequent roles, there were three roles in particular that received a wrong classification quite frequently even with a relatively high amount of training examples: comparison, experiencer and goal. As for comparison, one problem is that there are a wide range of formal means to express this role: 21 in total, which is on the high side, considering that the median role only has 12 formal means and that there is only an average amount of training examples for this role (198 in total). Another problem is that unlike for roles such as \"time\" and \"location\", the argument of the verb can be almost any lemma (and, when it is used in an adverbial relationship, the verb itself as well): if we look at sentence 2, for instance, neither the verb \u1f14\u03c7\u03c9 (\u00e9kh\u014d: \"have\") nor the noun \u1f04\u03bd\u03b8\u03c1\u03c9\u03c0\u03bf\u03c2 (\u00e1nthr\u014dpos: \"human\") is particularly useful to identify the role of \u1f00\u03bd\u03c4\u03af (ant\u00ed: \"instead\"): instead \u1f00\u03bd\u03c4\u03af functions more as a \"mediator\" between \u03ba\u03c5\u03bd\u03bf\u03ba\u03ad\u03c6\u03b1\u03bb\u03bf\u03c2 (kunok\u00e9phalos: \"baboon\") and \u1f04\u03bd\u03b8\u03c1\u03c9\u03c0\u03bf\u03c2. Involving not only the verb but also its dependents would help in this case, but since the comparative construction can refer to any element in the sentence this problem is rather complicated (and might be more appropriate to solve at the parsing stage).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "( The experiencer role is most often confused with the beneficiary/maleficiary role. This happens in particular when this role receives the label ADV \"adverbial\" (recall 0.173) rather than OBJ \"complement\" (recall 0.817). In this case both \"beneficiary\" and \"experiencer\" refer to a person who is affected in some way by the action of the main verb, and the difference between being advantaged or disadvantaged by an action and being affected by it is often only subtle (and sometimes also inconsistently annotated). In sentence 3, for instance, \u03c3\u03bf\u03af (so\u00ed \"for you\") has been labeled as an experiencer, but might also be considered a beneficiary: \"the rest is according to your wishes for your benefit\". In general verbs that denote an action that have clear results (e.g. \u03c0\u03bf\u03b9\u03ad\u03c9 poi\u00e9\u014d \"make\", \u03c0\u03b1\u03c1\u03b1\u03c3\u03ba\u03b5\u03c5\u03ac\u03b6\u03c9 paraskeu\u00e1z\u014d \"prepare\" etc.) would be more likely to have a beneficiary rather than an experiencer adverbial, but more training data is likely needed to learn this subtle difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "( If (\u2026) the rest is according to your wishes for you, it would be good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "Finally, as for goal, its large amount of confusion with roles such as \"cause\" or \"respect\" is not very surprising, as they are expressed by similar argument lemmas. However, the role is also frequently confused with roles such as \"direction\" and \"location\" (to a lesser extent). While the same formal means are often used to express goals and directions (e.g. \u03b5\u1f30\u03c2/\u03ba\u03b1\u03c4\u03ac/\u1f10\u03c0\u03af/\u03c0\u03c1\u03cc\u03c2 + accusative), one would expect directions to be used predominantly with concrete objects and goals with non-concrete objects. However, in general non-concrete objects do perform quite badly: their accuracy is only 0.655, as opposed to 0.744 for all nouns in general. This might suggest that these nouns are not that well modelled by their distributional vector (which we also found to some extent in Keersmaekers and Speelman to be submitted), although other explanations (e.g. non-concrete objects typically receiving roles that are harder to model in general) are also possible. Other than that, there was also a large influence of the syntactic label: the recall of goals that had the label ADV was 0.493 while it was only 0.111 for the label OBJand 35/48 of the goals that were misclassified as direction had the label \"OBJ\": this is consistent with the fact that goals predominantly have the ADV label (80%) while directions predominantly have OBJ (83%), and some of the goals that were classified as OBJ were in fact misclassifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision",
"sec_num": null
},
{
"text": "This paper has described a first approach to automatic semantic role labeling for Ancient Greek, using a Random Forest classifier trained with a diverse range of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "While the amount of training data was relatively low (only about 12500 tokens for 29 roles), the model was still able to receive a classification accuracy of about 76%. The most helpful features were distributional semantic vectors, created on a large corpus of 37 million tokens, while other features (lemmas, morphology, animacy label, part-ofspeech) did not contribute as much. Probably it is exactly this small amount of training samples that explains why these vectors are so important: since there are a large amount of lemmas in the training data (about 2700 argument lemmas and 1900 verb lemmas), the model is able to reduce this variation by assigning similar vectors to semantically similar lemmas. The distinctions that features such as morphology are able to make (e.g. the role agent as expressed by \u1f51\u03c0\u03cc hup\u00f3 \"by\" with the genitive is rare with active verbs) might be too subtle, on the other hand, to be statistically picked up by the model with the relatively low training examples we have, and therefore these features would perhaps be more helpful when there is more data to learn from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "An in-depth error analysis reveals a number of ways for further improvement. First of all, the most important step would be expanding the amount of training data, since there is an obvious correlation between the amount of training examples and the performance of each role. Secondly, while the distributional semantic approach works well for most words, some categories (e.g. relative pronouns, infinitives) are not modelled that well and might require a special treatment. Thirdly, non-concrete words turned out to be particularly problematic, and need to be investigated in more detail (particularly by examining if their meaning is modelled well by their semantic vector). Finally, the syntactic relation (adverbial or complement) was also relatively influential in the model, and some wrongly classified instances had in fact received the wrong syntactic label. Therefore improving the syntactic data with regards to this distinction would also likely improve results, especially when moving from manually disambiguated syntactic data (as used in this paper) to automatically parsed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "The semantic role labeling system used in this paper, as well as the training data on which the system was trained (including all modifications of existing treebanks) is available on GitHub. 10 Hopefully this will encourage corpus annotators to add a semantic layer to their project (since there is already an automatically annotated basis to start from), so that their data can also be integrated in the system and results can be further improved. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "While I am planning to include nominatives and accusatives in future versions of the labeler, this was not possible at this moment because none of the projects I included annotated them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While the AGDT treebank is also available in the Universal Dependencies project, I used their original version (in the style of the Prague Dependency Treebank) to ensure compatibility with the other projects included.3 This is the DepHeadChild model in the Keersmaekers and Speelman (to be submitted) paper.4 This \"latent meaning\" simply refers to the fact that several context features tend to be highly correlated: e.g. a word such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the variable importances, gender and number of the argument of the verb were considered to be the most important, while in particular person, number and voice of the verb ranked lower than any other feature (including any of the 100 vector elements). As",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "I combined these two roles because they were not distinguished in the data, but since some prepositions (e.g. \u1f51\u03c0\u03ad\u03c1 + genitive) can only be used for a beneficiary, while others (e.g. \u03ba\u03b1\u03c4\u03ac + genitive) only for a maleficiary, in the future it might be better to keep them apart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Example Agent (364 instances) \u03b4\u03cd\u03bf \u03b4\u1f72 \u03c0\u03b1\u1fd6\u03b4\u03b5\u03c2 \u1f51\u03c0\u1f78 \u03bc\u03b7\u03c4\u03c1\u1f78\u03c2 \u03c4\u03c1\u03b5\u03c6\u03cc\u03bc\u03b5\u03bd\u03bf\u03b9 \"Two children being raised by their mother\" Beneficiary/Maleficiary 11 (715 instances) \u1f51\u03c0\u1f72\u03c1 \u03c4\u1fc6\u03c2 \u03c0\u03b1\u03c4\u03c1\u03af\u03b4\u03bf\u03c2 \u1f00\u03c0\u03bf\u03b8\u03b1\u03bd\u03b5\u1fd6\u03bd \u03b4\u03c5\u03bd\u03ae\u03c3\u03bf\u03bc\u03b1\u03b9 \"I will be able to die for my native land\" Cause (753 instances) \u1f10\u03ba\u03c0\u03bb\u03b1\u03b3\u1ff6\u03bd \u03b4\u03b9\u1f70 \u03c4\u1f78 \u03c0\u03b1\u03c1\u03ac\u03b4\u03bf\u03be\u03bf\u03bd \u03c4\u1fc6\u03c2 \u1f44\u03c8\u03b5\u03c9\u03c2 \"Being struck by the incredibility of the sight\" Companion (424 instances) \u03c4\u03bf\u1fe6\u03c4\u03bf\u03bd \u03bc\u03b5\u03c4\u1f70 \u03a3\u03b9\u03c4\u03ac\u03bb\u03ba\u03bf\u03c5\u03c2 \u1f14\u03c0\u03b9\u03bd\u03bf\u03bd \u03c4\u1f78\u03bd \u03c7\u03c1\u03cc\u03bd\u03bf\u03bd \"During that time I was drinking with Sitalces\" Comparison (198 instances) \u03c0\u03ac\u03bd\u03c4\u03b1 \u1f10\u03bf\u03b9\u03ba\u03cc\u03c4\u03b5\u03c2 \u1f00\u03bd\u03b8\u03c1\u03ce\u03c0\u03bf\u03b9\u03c2 \u03c0\u03bb\u1f74\u03bd \u03c4\u1fc6\u03c2 \u03ba\u03cc\u03bc\u03b7\u03c2 \"Completely looking like humans except for their hair\" Condition (5 instances) \u03ba\u03b5\u03bb\u03b5\u03cd\u03bf\u03bd\u03c4\u03bf\u03c2 \u1f10\u03c0' \u03b1\u1f50\u03c4\u03bf\u03c6\u03ce\u03c1\u1ff3 \u03c4\u1f78\u03bd \u03bc\u03bf\u03b9\u03c7\u1f78\u03bd \u03ba\u03c4\u03b5\u03af\u03bd\u03b5\u03c3\u03b8\u03b1\u03b9 \"Commanding that an adulterer should be killed in case he is caught\" Degree \u1f14\u03c3\u03c4\u03b1\u03b9 \u03c4\u1fc7 \u03a3\u03b1\u03c1\u03c1\u03b1 \u03c5\u1f31\u03cc\u03c2 \"Sara will have a son\" (lit. 
\"There will be a son to Sara\") Property (6 instances) \u1f45 \u1f26\u03bd \u1f00\u03b3\u03b1\u03b8\u03bf\u1fe6 \u03b2\u03b1\u03c3\u03b9\u03bb\u03ad\u03c9\u03c2 \"What is typical of a good king\" Recipient (1289 instances) \u03c4\u1f70 \u1f31\u03bc\u03ac\u03c4\u03b9\u03b1 \u03b1\u1f50\u03c4\u03bf\u1fe6 \u1f14\u03b4\u03c9\u03ba\u03b5\u03bd \u03c4\u1ff7 \u0391\u1f30\u03c3\u03ce\u03c0\u1ff3 \"He gave Aesop his clothes\" Respect (800 instances) \u03bc\u03ae\u03c4\u03b5 \u1f00\u03bb\u03b3\u03b5\u1fd6\u03bd \u03ba\u03b1\u03c4\u1f70 \u03c3\u1ff6\u03bc\u03b1 \u03bc\u03ae\u03c4\u03b5 \u03c4\u03b1\u03c1\u03ac\u03c4\u03c4\u03b5\u03c3\u03b8\u03b1\u03b9 \u03ba\u03b1\u03c4\u1f70 \u03c8\u03c5\u03c7\u03ae\u03bd \"Neither having pain in the body neither being disturbed in the soul\" Result (15 instances) \u03c6\u03b1\u03af\u03bd\u1fc3 \u03b5\u1f30\u03c2 \u03bc\u03b1\u03bd\u03af\u03b1\u03bd \u1f10\u03bc\u03c0\u03b5\u03c0\u03c4\u03c9\u03ba\u03ad\u03bd\u03b1\u03b9 \"You seem to be fallen into madness\" Source (803 instances) \u1fe5\u03af\u03c0\u03c4\u03b5\u03b9 \u03b4\u1f72 \u03b1\u1f50\u03c4\u1f78\u03bd \u1f10\u03be \u03bf\u1f50\u03c1\u03b1\u03bd\u03bf\u1fe6 \u0396\u03b5\u1f7a\u03c2 \"Zeus threw him from Heaven\" Time (943 instances) \u03c4\u03b5\u03c4\u03ac\u03c1\u03c4\u1ff3 \u03c4\u03b5 \u03ba\u03b1\u03af \u03b5\u1f30\u03ba\u03bf\u03c3\u03c4\u1ff7 \u03c4\u1fc6\u03c2 \u03b2\u03b1\u03c3\u03b9\u03bb\u03b5\u03af\u03b1\u03c2 \u1f14\u03c4\u03b5\u03b9 \u03bd\u03cc\u03c3\u1ff3 \u03b4\u03b9\u03b5\u03c6\u03b8\u03ac\u03c1\u03b7 \"He died from disease in the twenty-fourth year of his reign\" Time frame (45 instances) \u03bc\u03b7\u03b4\u02bc \u03b5\u1f30\u03bb\u03b7\u03c6\u03ad\u03bd\u03b1\u03b9 \u03bc\u03b7\u03b8\u1f72\u03bd \u1f10\u03bd\u03b9\u03b1\u03c5\u03c4\u03bf\u1fe6 \"Not receiving anything over the course of the year\" Totality (150 instances) \u1f11\u03c0\u03b9\u03bb\u03b1\u03bc\u03b2\u03ac\u03bd\u03b5\u03c4\u03b1\u03b9 \u03c4\u1fc6\u03c2 \u03c7\u03b5\u03b9\u03c1\u1f78\u03c2 \u03b1\u1f50\u03c4\u1fc6\u03c2 \"He took her by the hand\" Value (57 instances) \u1f11\u03be\u03ae\u03ba\u03bf\u03bd\u03c4\u03b1 \u03b4\u03b7\u03bd\u03b1\u03c1\u03af\u03c9\u03bd \u03c4\u03bf\u1fe6\u03c4\u03bf\u03bd \u1f20\u03b3\u03cc\u03c1\u03b1\u03ba\u03b1 \"I've bought him for sixty denarii\" Table 4 : Pedalion semantic roles",
"cite_spans": [],
"ref_spans": [
{
"start": 1705,
"end": 1712,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Role",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "C",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "J",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, C.F., Fillmore, C.J. and Lowe, J.B. (1998). The Berkeley FrameNet Project. In Proceedings of the 17th International Conference on Computational Linguistics Volume 1, pages 86-90, Montreal, Quebec, Canada, august. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An Ownership Model of Annotation: The Ancient Greek Dependency Treebank",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Mambrini",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Crane",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories (TLT 8)",
"volume": "",
"issue": "",
"pages": "5--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, D., Mambrini, F. and Crane, G. (2009). An Ownership Model of Annotation: The Ancient Greek Dependency Treebank. In Marco Passarotti, Adam Przepi\u00f3rkowski, Savina Raynaud, Frank Van Eynde, editors, Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories (TLT 8), pages 5-15, Milan, Italy, december. EDUCatt.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Digital Marmor Parium. Presented at the Epigraphy Edit-a-thon. Editing Chronological and Geographic Data in Ancient Inscriptions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Berti",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berti, M. (2016). The Digital Marmor Parium. Presented at the Epigraphy Edit-a-thon. Editing Chronological and Geographic Data in Ancient Inscriptions, Leipzig.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "randomForest: Breiman and Cutler's Random Forests for Classification and Regression",
"authors": [
{
"first": "L",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cutler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Liaw",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wiener",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Breiman, L., Cutler, A., Liaw, A., and Wiener, M. (2018). randomForest: Breiman and Cutler's Random Forests for Classification and Regression. https://CRAN.R- project.org/package=randomForest.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic Role Annotation in the Ancient Greek Dependency Treebank",
"authors": [
{
"first": "G",
"middle": [
"G A"
],
"last": "Celano",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Crane",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourteenth International Workshop on Treebanks and Linguistic Theories (TLT14)",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Celano, G.G.A. and Crane, G. (2015). Semantic Role Annotation in the Ancient Greek Dependency Treebank. In Marcus Dickson, Erhard Hinrichs, Agnieszka Patejuk, Adam Przepi\u00f3rkowski, editors, Proceedings of the Fourteenth International Workshop on Treebanks and Linguistic Theories (TLT14), pages 26-34, Warsaw, Poland, december. Institute of Computer Science, Polish Academy of Sciences.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sintaxis Del Griego Cl\u00e1sico",
"authors": [
{
"first": "E",
"middle": [],
"last": "Crespo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Conti",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Maquieira",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crespo, E., Conti, L., and Maquieira, H. (2003). Sintaxis Del Griego Cl\u00e1sico. Madrid: Gredos.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Indexing by Latent Semantic Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., and Harshman, R. (1990). Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391-407.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, D. and Jurafsky, D. (2002). Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245- 288.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On Classification Trees and Random Forests in Corpus Linguistics: Some Words of Caution and Suggestions for Improvement",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gries",
"suffix": ""
}
],
"year": 2019,
"venue": "Corpus Linguistics and Linguistic Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gries, S. (2019). On Classification Trees and Random Forests in Corpus Linguistics: Some Words of Caution and Suggestions for Improvement. Corpus Linguistics and Linguistic Theory.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Creating a Parallel Treebank of the Old Indo-European Bible Translations",
"authors": [
{
"first": "D",
"middle": [
"T T"
],
"last": "Haug",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "J\u00f8hndal",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second Workshop on Language Technology for Cultural Heritage Data",
"volume": "",
"issue": "",
"pages": "27--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haug, D.T.T. and J\u00f8hndal, M. (2008). Creating a Parallel Treebank of the Old Indo-European Bible Translations. In Caroline Sporleder and Kiril Ribarov (Conference Chairs), et al., editors, Proceedings of the Second Workshop on Language Technology for Cultural Heritage Data (LaTeCH 2008), pages 27-34, Marrakech, Morocco, may. European Language Resource Association (ELRA).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep Semantic Role Labeling: What Works and What's next",
"authors": [
{
"first": "L",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "473--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He, L., Lee, K., Lewis, M., and Zettlemoyer, L. (2017). Deep Semantic Role Labeling: What Works and What's next. In Regina Barzilay, Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473-483, Vancouver, Canada, july- august. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Creating a Richly Annotated Corpus of Papyrological Greek: The Possibilities of Natural Language Processing Approaches to a Highly Inflected Historical Language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Keersmaekers",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keersmaekers, A. (2019). Creating a Richly Annotated Corpus of Papyrological Greek: The Possibilities of Natural Language Processing Approaches to a Highly Inflected Historical Language. Digital Scholarship in the Humanities.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Creating, Enriching and Valorizing Treebanks of Ancient Greek",
"authors": [
{
"first": "A",
"middle": [],
"last": "Keersmaekers",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Mercelis",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Swaelens",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Van Hal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019)",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keersmaekers, A., Mercelis, W., Swaelens, C., and Van Hal, T. (2019). Creating, Enriching and Valorizing Treebanks of Ancient Greek. In Marie Candito, Kilian Evang, Stephan Oepen, Djam\u00e9 Seddah, editors, Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019), pages 109-117, Paris, France, august. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Applying Distributional Semantic Models to a Historical Corpus of a Highly Inflected Language: The Case of Ancient Greek",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Applying Distributional Semantic Models to a Historical Corpus of a Highly Inflected Language: The Case of Ancient Greek.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon",
"authors": [
{
"first": "Kipper",
"middle": [],
"last": "Schuler",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kipper Schuler, K. (2005). VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon. Dissertation in Computer and Information Science. University of Pennsylvania.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1506--1515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcheggiani, D. and Titov, I. (2017). Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling. In Martha Palmer, Rebecca Hwa, Sebastian Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515, Copenhagen, Denmark, september. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semantic Role Labeling: An Introduction to the Special Issue",
"authors": [
{
"first": "L",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "K",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "2",
"pages": "145--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00e0rquez, L., Carreras, X., Litkowski, K.C., and Stevenson, S. (2008). Semantic Role Labeling: An Introduction to the Special Issue. Computational Linguistics, 34(2):145-159.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic Role Labeling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, M., Gildea, D., and Xue, N. (2010). Semantic Role Labeling. Morgan & Claypool.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "RSpectra: Solvers for Large-Scale Eigenvalue and SVD Problems",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Guennebaud",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Niesen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiu, Y., Mei, J., Guennebaud, G., and Niesen, J. (2019). RSpectra: Solvers for Large-Scale Eigenvalue and SVD Problems. https://CRAN.R- project.org/package=RSpectra.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "R: A language and environment for statistical computing. R Foundation for Statistical Computing",
"authors": [
{
"first": "",
"middle": [],
"last": "R Core Team",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Core Team. (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "From Frequency to Meaning: Vector Space Models of Semantics",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P.D. and Pantel, P. (2010). From Frequency to Meaning: Vector Space Models of Semantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Reconciling the Dynamics of Language with a Grammar Handbook: The Ongoing Pedalion Grammar Project",
"authors": [
{
"first": "T",
"middle": [],
"last": "Van Hal",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ann\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Digital Scholarship in the Humanities",
"volume": "32",
"issue": "2",
"pages": "448--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Hal, T. and Ann\u00e9, Y. (2017). Reconciling the Dynamics of Language with a Grammar Handbook: The Ongoing Pedalion Grammar Project. Digital Scholarship in the Humanities, 32(2):448-454.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Diorisis Ancient Greek Corpus",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vatri",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Mcgillivray",
"suffix": ""
}
],
"year": 2018,
"venue": "Research Data Journal for the Humanities and Social Sciences",
"volume": "3",
"issue": "1",
"pages": "55--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vatri, A. and McGillivray, B. (2018). The Diorisis Ancient Greek Corpus. Research Data Journal for the Humanities and Social Sciences, 3(1):55-65.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "End-to-End Learning of Semantic Role Labeling Using Recurrent Neural Networks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1127--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Y. and Xu, W. (2015). End-to-End Learning of Semantic Role Labeling Using Recurrent Neural Networks. In Chengqing Zong, Michael Strube, editors, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1127-1137, Beijing, China, july. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Lemmatized Ancient Greek Texts. 1.2.5",
"authors": [
{
"first": "Giuseppe",
"middle": [
"G"
],
"last": "Celano",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe G. A Celano. (2017). Lemmatized Ancient Greek Texts. 1.2.5.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Gorman Trees. 1.0.1",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vanessa Gorman. (2019). Gorman Trees. 1.0.1. https://github.com/vgorman1/Greek-Dependency-Trees.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Perseids Project -Treebanked Commentaries at Tufts University",
"authors": [
{
"first": "J",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harrington",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew J. Harrington. (2018). Perseids Project - Treebanked Commentaries at Tufts University. https://perseids-project.github.io/harrington_trees/.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Treebank of Aphtonius' Progymnasmata",
"authors": [
{
"first": "Polina",
"middle": [],
"last": "Yordanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Polina Yordanova. (2018). Treebank of Aphtonius' Progymnasmata. https://github.com/polinayordanova/Treebank-of- Aphtonius-Progymnasmata.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The present study was supported by the Research Foundation Flanders (FWO) [grant number: 1162017N]. Some preliminary work was carried out by Colin Swaelens (whose master thesis I supervised). I would like to thank my PhD supervisors (Toon Van Hal and Dirk Speelman), as well as three anonymous reviewers for their detailed and helpful feedback.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Recall scores for semantic roles as a function of their logarithmically scaled token frequencyFigure 2: Confusion matrix of semantic roles",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF4": {
"content": "<table/>",
"text": "Accuracy when leaving out certain features",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>: Accuracy per genre</td></tr><tr><td>Unsurprisingly, the texts that did well are quite repetitive</td></tr><tr><td>in nature, have a large amount of training examples and use</td></tr><tr><td>an everyday, non-abstract language: religious and</td></tr><tr><td>documentary texts.</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}