{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:47.486804Z"
},
"title": "What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP",
"authors": [
{
"first": "Oskar",
"middle": [],
"last": "Wysocki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Manchester",
"location": {}
},
"email": ""
},
{
"first": "Malina",
"middle": [],
"last": "Florea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Manchester",
"location": {}
},
"email": ""
},
{
"first": "D\u00f3nal",
"middle": [],
"last": "Landers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CRUK Manchester Institute",
"location": {
"addrLine": "Cancer Biomarker Centre"
}
},
"email": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Manchester",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "SemEval is the primary venue in the NLP community for the proposal of new challenges and for the systematic empirical evaluation of NLP systems. This paper provides a systematic quantitative analysis of SemEval aiming to evidence the patterns of the contributions behind SemEval. By understanding the distribution of task types, metrics, architectures, participation and citations over time we aim to answer the question on what is being evaluated by Se-mEval.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "SemEval is the primary venue in the NLP community for the proposal of new challenges and for the systematic empirical evaluation of NLP systems. This paper provides a systematic quantitative analysis of SemEval aiming to evidence the patterns of the contributions behind SemEval. By understanding the distribution of task types, metrics, architectures, participation and citations over time we aim to answer the question on what is being evaluated by Se-mEval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A large portion of the empirical methods in Natural Language Processing (NLP) are defined over canonical text interpretation tasks such as Named Entity Recognition (NER), Semantic Role Labeling (SRL), Sentiment Analysis (SA), among others. The systematic creation of benchmarks and the comparative performance analysis of resources, representations and algorithms is instrumental for moving the boundaries of natural language interpretation. SemEval (May et al., 2019; Apidianaki et al., 2018; Bethard et al., 2017 Bethard et al., , 2016 Nakov et al., 2015; Nakov and Zesch, 2014; Manandhar and Yuret, 2013; Agirre et al., 2012) is the primary venue in the NLP community for the organisation of shared NLP tasks and challenges. SemEval is organised as an annual workshop co-located with the main NLP conferences and has attracted a large and growing audience of task organisers and participants.",
"cite_spans": [
{
"start": 450,
"end": 468,
"text": "(May et al., 2019;",
"ref_id": null
},
{
"start": 469,
"end": 493,
"text": "Apidianaki et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 494,
"end": 514,
"text": "Bethard et al., 2017",
"ref_id": null
},
{
"start": 515,
"end": 537,
"text": "Bethard et al., , 2016",
"ref_id": "BIBREF4"
},
{
"start": 538,
"end": 557,
"text": "Nakov et al., 2015;",
"ref_id": null
},
{
"start": 558,
"end": 580,
"text": "Nakov and Zesch, 2014;",
"ref_id": null
},
{
"start": 581,
"end": 607,
"text": "Manandhar and Yuret, 2013;",
"ref_id": null
},
{
"start": 608,
"end": 628,
"text": "Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite its recognition as a major driver in the creation of gold-standards and evaluation campaigns, there is no existing meta-analysis which interprets the overall contribution of SemEval as a collective effort. This paper aims to address this gap by performing a systematic descriptive quantitative analysis of 96 tasks encompassing the Se-mEval campaigns between 2012-2019. This study targets understanding the evolution of SemEval over this period, describing the core patterns with regard to task popularity, impact, task format (inputs, outputs), techniques, target languages and evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organised as follows: section 2 describes related work; 3 describes the methodology; 4 defines the underlying task macro-categories; 5 and 6 presents the number of tasks and popularity in 2012-2019; 7 discusses SemEval impact in terms of citations; 8 shows targeted languages; then, sections 9, 10, 11 analyse input, output and evaluation metrics; 11 focuses on sentiment analysis architectures and representations; this is followed by a Discussion section; we close the paper with Recommendations and Conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Each SemEval task is described by an anthology, which contains: a summary of previous editions or similar tasks, references to previous works, detailed task description, evaluation methods, available resources, overview of submitted systems and final results of the competition. It is worth noting, there is a variation, or even inconsistency, in the structure and the level of detail in the description. Participants are also encouraged to submit papers with systems architecture explanations. However, there is a lack of overall analysis across different tasks and years in SemEval. There are existing studies on the analysis of specific SemEval tasks. Nakov et al. (2016) focuses on developing Sentiment Analysis tasks in 2013-2015. Sygkounas et al. (2016) is an example of a replication study of the top performing systems, in this case systems used in SemEval Twitter Sentiment Analysis (2013) (2014) (2015) , and focuses on architectures and performance. Evolution and challenges in semantics similarity were described in Jimenez et al. (2015) . This is an example of a study on the performance of a given type of architecture across tasks of the same type. There also exist studies on shared tasks in given domain, specially in clinical application of NLP (Filannino and Uzuner, 2018) , (Chapman et al., 2011) . However, they refer to tasks outside the SemEval and are more result oriented rather than task organization. Some studies discuss ethical issues in the organisation and participation of shared tasks. An overview focusing on task competitive nature and fairness can be found in Parra Escart\u00edn et al. (2017) . In Nissim et al. (2017) authors also relate to these issues, yet giving the priority to advancing the field over fair competition.",
"cite_spans": [
{
"start": 655,
"end": 674,
"text": "Nakov et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 736,
"end": 759,
"text": "Sygkounas et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 892,
"end": 898,
"text": "(2013)",
"ref_id": null
},
{
"start": 899,
"end": 905,
"text": "(2014)",
"ref_id": null
},
{
"start": 906,
"end": 912,
"text": "(2015)",
"ref_id": null
},
{
"start": 1028,
"end": 1049,
"text": "Jimenez et al. (2015)",
"ref_id": "BIBREF8"
},
{
"start": 1263,
"end": 1291,
"text": "(Filannino and Uzuner, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 1294,
"end": 1316,
"text": "(Chapman et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 1602,
"end": 1624,
"text": "Escart\u00edn et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 1630,
"end": 1650,
"text": "Nissim et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Comparatively, this paper covers a wider range of NLP topics, and compares sentiment analysis and semantic similarity as well as other task types/groups in a systematic manner. To the best to our knowledge this is the first systematic analysis on SemEval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We build a corpus based on the ACL anthology archive from the SemEval workshops between the years 2012-2019. Reference material included ACL anthology papers covering the task description, tasks' websites and papers describing the participating systems. All the reference papers included in this analysis are reported in the Appendix B. The pre-processing analysis consisted in manually extracting the target categories for the analysis which includes: task types, input and output types, as well as evaluation metrics, number of teams, languages and system architectures. Tasks were grouped based on the similarity between task types. If the same team took part in several tasks the same year, we considered each participation as distinct. There are four missing tasks in the plotted indexes, due to cancellation (2015-task16, 2019-task11), task-sharing (2013-task6) or lack of supporting task description (2013-task14). Numbers of citations are the numbers returned by Google Scholar, using Publish and Perish supporting API (Harzing, 2007) . The list of citations were manu- ally validated and noisy entries were filtered out. A final table with all the values extracted from the corpus is included in the Appendix B.",
"cite_spans": [
{
"start": 1027,
"end": 1042,
"text": "(Harzing, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis methodology",
"sec_num": "3"
},
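To make the aggregation step concrete, here is a minimal sketch (Python/pandas), assuming the manually extracted task attributes are stored in a hypothetical tasks.csv with one row per task; the file name and the column names (year, group, teams) are illustrative, not the paper's actual schema:

# Minimal sketch of the descriptive aggregation described above.
# "tasks.csv" and its column names are hypothetical, not the paper's data.
import pandas as pd

tasks = pd.read_csv("tasks.csv")  # one row per task: year, group, teams, citations, ...

# Number of tasks and total participating teams per year and task group.
summary = (
    tasks.groupby(["year", "group"])
         .agg(n_tasks=("group", "size"), n_teams=("teams", "sum"))
         .reset_index()
)
print(summary)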
{
"text": "Based on task description we group each task within a macro-category. Then, due to a large number of task types, tasks were clustered within 6 groups: Sentiment Analysis (SA); Semantic Analysis (SEM): semantic analysis, semantic difference, semantic inference, semantic role labeling, semantic parsing, semantic similarity, relational similarity; Information Extraction (IE): information extraction, temporal information extraction, argument mining, fact checking; Machine Translation (MT); Question Answering (QA); Other (OT): hypernym discovery, entity linking, lexical simplification, word sense disambiguation, taxonomy extraction, taxonomy enrichment. There are also macro-categories defined by the SemEval organizers, starting from 2015, but we found them not consistent enough for the purpose of this analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task types and groups",
"sec_num": "4"
},
{
"text": "Within 8 editions of SemEval, a total of 96 tasks were successfully announced. The number of tasks within one group is roughly similar every year (except for MT), as well as distribution of tasks in each edition. According to Fig.1a , we observe decreasing number of SEM tasks: 5 on average in 2012-2017, and only 2 in 2018-2019. Moreover, there were no machine translation tasks in the last 2 years, and a low number of MT tasks in general (only 4 tasks in 8 years).",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 232,
"text": "Fig.1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "Although SA has a relatively limited task complexity when compared to SEM or IE, which reflects a higher variety of task types and an abundance of more specific interpretation challenges, the number of SA tasks each year is high (4, 3, 3 and 4 in years 2016-2019). It is worth mentioning, that there are other 6 SA tasks in the forthcoming SemEval 2020. The absence of some task types may be caused by the emergence of specialized workshops or conferences, e.g. low number of MT tasks in SemEval is caused by the presence a separate venue for MT: the Conference On Machine Translation (Barrault et al., 2019) , which attracts more participants than SemEval in this field. 6 Task popularity As a measure of task popularity, we analysed how many teams participated in a given task. As the number of teams signed up to the task is usually much higher than the number submitting a system result, we consider only the latter.",
"cite_spans": [
{
"start": 585,
"end": 608,
"text": "(Barrault et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "The number of teams increased significantly from 62 in 2012 to 646 in 2019, which shows not only a popularity boost for SemEval, but an increase in the general interest for NLP. So far, a total of 1883 teams participated in this period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "In Fig.1b , we observe a gradual increase in Se-mEval popularity, 30% on average each year to 2018, with a +129% jump in 2019. This is associated mainly with a dramatic increase of interest for SA: 542 teams (84% of total) in 2019. However, at the same time, number of teams in non-SA tasks decreased from 132 in 2017, to 115 in 2018 and 104 in 2019.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig.1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "The most popular tasks groups along the years are SA and SEM, which gather more than 75% of teams on average each year. The third most popular is IE, in which total of 235 teams participated in SemEval from 2012 (12% of total). As a contrast, we observe a relatively low interest in QA and OT tasks. Only 41 teams participated in the last 3 years (3% of a total of 1148 in 2017-2019). Especially in OT tasks, which concentrates novel tasks, in many cases including novel formats.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "In the last 2 years, SA shows a major increase in popularity (76% of all teams, compared to 40% in 2013-2017). At the same time, in tasks such as 2019-10, 2018-4 and 2018-6, which are mathematical question answering, entity linking on multiparty dialogues and parsing time normalization, respectively, only 3, 4 and 1 teams submitted results. This divergence may be associated with an emergence of easily applicable ML systems and libraries, which better fit to standard classification tasks more prevalent in SA (in contrast to OT, QA nor IE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SemEval tasks in years",
"sec_num": "5"
},
{
"text": "As a measure of the impact of SemEval outcomes in the NLP community, we analysed the numbers of citations per task description in Google Scholar. The task description paper was used as a proxy to analyse the task impact within the NLP community. Papers submitted by participating teams describing systems and methods were not included on this analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "We considered the cumulative citations from 2012 to 2019 ( Fig.2a) , with additional distinction on citations of task description papers published in a given year (Fig.3a) . Citations within SemEval proceedings were treated separately, as we focused on the impact both outside ( Fig.2a ) and inside ( Fig.2b ) the SemEval community. In other words, citations found in Google Scholar are split into numbers of papers out and in the SemEval proceedings.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Fig.2a)",
"ref_id": "FIGREF1"
},
{
"start": 163,
"end": 171,
"text": "(Fig.3a)",
"ref_id": "FIGREF2"
},
{
"start": 279,
"end": 285,
"text": "Fig.2a",
"ref_id": "FIGREF1"
},
{
"start": 301,
"end": 307,
"text": "Fig.2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "SA and SEM have the highest impact, being the most cited tasks along the years both inside and outside SemEval community, what can be attributed to their high popularity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "Considering the external impact, in 2019 SA and SEM anthologies contributed with 2847 (41%) and 2426 (35%) citations respectively. IE has 985 citations (14%) and QA contributed with 148 citations (2%). The OT group, which consists of less canonical tasks, accumulated 468 citations (7%). The impact of MT papers is noticeably lower -84 (1%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "In terms of citations within the SemEval community (in all SemEval 2012-2019 proceedings), we observe a similar pattern: 41% and 37% citations in 2019 come from SA and SEM (357 and 322), and for remaining task groups proportions are almost identical as in citations outside community (Chi.sq. p-value=0.06).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
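As an illustration of the chi-squared comparison above, the sketch below (Python/SciPy) contrasts per-group citation counts inside and outside the SemEval proceedings. The outside counts and the SA/SEM inside counts are taken from the text; the remaining inside counts are placeholders, so the resulting p-value will not match the reported 0.06.

# Homogeneity test between the inside/outside citation distributions over task groups.
# Outside counts and the SA/SEM inside counts come from the text; the other inside
# counts are placeholders, for illustration only.
from scipy.stats import chi2_contingency

groups = ["SA", "SEM", "IE", "OT", "QA", "MT"]
outside = [2847, 2426, 985, 468, 148, 84]
inside = [357, 322, 120, 55, 18, 10]  # IE/OT/QA/MT values are hypothetical

chi2, p, dof, _ = chi2_contingency([outside, inside])
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p-value = {p:.4f}")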
{
"text": "The number of citations outside is 8 times higher than inside the community. This proves the scientific impact and coverage, which leads to beneficial effect of SemEval on the overall NLP community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "A total of 6958 citations from 2019 are depicted in Fig.3a with distinction on the year in which the task was published (e.g. tasks from 2016 are cited 1682 times (23%)). Similarly, a total of 876 citations in the SemEval proceedings are presented in Fig.3b (e.g. anthologies published in 2015 are cited 163 times in all SemEval proceedings so far). SA tasks from 2016, SEM from 2014 and IE from 2013 have the highest impact compared within groups (40%, 28% and 42% respectively). One could expect higher numbers of citations for older papers, however, we do not observe this pattern.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 58,
"text": "Fig.3a",
"ref_id": "FIGREF2"
},
{
"start": 251,
"end": 257,
"text": "Fig.3b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The impact of SemEval papers",
"sec_num": "7"
},
{
"text": "We analysed SemEval in terms of languages used in the tasks (Fig.4 ). We can distinguish 3 clusters: English-only (except for 3 tasks entirely in Chinese); multi-lingual, which define identical subtasks for several languages; cross-lingual (targeting the use of semantic relation across languages).",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 66,
"text": "(Fig.4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Languages in tasks",
"sec_num": "8"
},
{
"text": "In total of 96 tasks, 30 investigated more than one language (multi-lingual and cross-lingual tasks) and 63 tasks were using only English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages in tasks",
"sec_num": "8"
},
{
"text": "The five most popular languages, excluding English are: Spanish (16), French (10), Italian (10), Arabic (8), German (8). Although Chinese is the 1st language in number of speakers, only 4 tasks were organised for Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages in tasks",
"sec_num": "8"
},
{
"text": "Most of multi-lingual or cross-lingual tasks are related to SA (5 in 2016-2018) or SEM (15 in 2012-2019), and obviously on MT tasks (3 in 2012-2014). There were 3 OT tasks, only one QA task, and no IE tasks. Task 11 in 2017 concerning program synthesis, aiming to translate commands in English into (program) code, attracted only one team.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages in tasks",
"sec_num": "8"
},
{
"text": "In 2018 and 2019 the interest in other languages is lower compared to previous years. Languages other than English were proposed only 5 and 3 times, respectively, whereas in 2016 and 2017 we observed the occurrence of respectively 10 and 14 times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Languages in tasks",
"sec_num": "8"
},
{
"text": "In order to better understand the evolution of the semantic complexity of the tasks, we analysed them in terms of the types used to represent input and output data in all subtasks. Based on their descriptions, we devised a list of 25 different abstract types used, then assigning each subtask the most appropriate Input and Output Types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input and Output Analysis",
"sec_num": "9"
},
{
"text": "Taking into consideration both their complexity and purpose, we split the type list into 5 clusters: cluster 1: document, text, paragraph, sentence, phrase, word, number; cluster 2: score, score real value, score whole value, class label, probability distribution; cluster 3: entity, attribute, topic, tree, Directed Acyclic Graph (DAG); cluster 4: question, answer, query; cluster 5: Knowledge Base (KB), program, time interval, timeline, semantic graph, syntactic labeled sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types and Clusters",
"sec_num": "9.1"
},
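For reference, the five clusters can be written down as a simple mapping (Python); the type names below are exactly those listed above.

# The five type clusters as listed in the text.
TYPE_CLUSTERS = {
    1: ["document", "text", "paragraph", "sentence", "phrase", "word", "number"],
    2: ["score", "score real value", "score whole value", "class label",
        "probability distribution"],
    3: ["entity", "attribute", "topic", "tree", "Directed Acyclic Graph (DAG)"],
    4: ["question", "answer", "query"],
    5: ["Knowledge Base (KB)", "program", "time interval", "timeline",
        "semantic graph", "syntactic labeled sentence"],
}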
{
"text": "As expected, types from cluster 1 (sequential tokens) make up for 76% of overall input types used in all tasks (depicted in the Appendix A, Fig.A.1 ). Most popular input type is paragraph, for which about 60% of cases represents a tweet. The remaining 24% is split across clusters 2, 3, 4 and 5. A subtle divergence towards the right-hand side can be noticed, starting with 2015, driven mostly by tasks from SA and IE task groups. The most dominant Input Types from each cluster are paragraph, class label, entity, question and KB.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Fig.A.1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Input Types",
"sec_num": "9.2"
},
{
"text": "As shown in Fig.5 , data types from clusters 2 and 3 are the majority in this case, accounting for 68% of used representations. Class labels are repeatedly employed, especially by SA tasks. Cluster 1 types are constantly used across the years, fully dependent on the task types given in a certain year, 78% of them coming from SEM, IE and OT. Rarely used are typed from clusters 4 and 5, accounting for just 10% of the total, half of which occur in SEM tasks during 2016 and 2017 complex tasks such as Community Question Answering and Chinese Semantic Dependency Parsing. We also found a possible relation between output type and popularity. In 2012-2017 tasks where outputs were in cluster 4 or 5, attracted 8.3 teams per task on average, while in clusters 1-3 13.9 teams/task. However, despite major increase in SemEval popularity, in 2018-2019 the former attracted only 7 teams/task, and the latter 43.5 teams/task. The group with most type variety is SEM, covering types across all clusters. On the other side of the spectrum, SA has the least variety, despite it being the most popular task group. The most dominant Output Types from each cluster are paragraph, class label, entity, answer and semantic graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 17,
"text": "Fig.5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Output Types",
"sec_num": "9.3"
},
{
"text": "We counted a total of 29 different evaluation metrics used in SemEval (Fig.6) .",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "(Fig.6)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "10"
},
{
"text": "At a subtask level, the most frequent metric is F1-score, with 105 usages, followed by recall and precision, with 51 and 49 usages respectively, and accuracy, with 26 usages. F1, recall and precision are frequently jointly used, the last two playing the role of supporting break-down metrics for F1 in 95% of cases. This combination is very popular, especially for IE tasks, almost half of the use coming from this task group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "10"
},
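Since F1 is so often reported together with its two supporting metrics, a minimal reminder of how the three relate is sketched below (Python); the tp/fp/fn counts are hypothetical.

# F1 is the harmonic mean of precision and recall.
tp, fp, fn = 80, 20, 40  # hypothetical counts, for illustration only
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")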
{
"text": "The top 5 evaluation metrics make up 84% of the total number of metrics used in all years, last 12 (almost half) being only used once. In 89% of cases when rare evaluation metrics (from Kendall's T to the right) are used, they occur in SA and SEM tasks e.g. Jaccard index in Affect in Tweets (2018) or Smatch in Meaning Representation Parsing (2016). Furthermore, 67% of the least used evaluation metrics (only used 3 times or less) appear in 2015-2017, the same period when we could see tasks experimenting the most with input and output types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "10"
},
{
"text": "F1, recall and precision (depicted in Appendix A, Fig.A.2 11 Zooming in into Sentiment Analysis",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Fig.A.2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Metrics against Output Types",
"sec_num": "10.1"
},
{
"text": "The systematic analysis of the prevalent methods and architectures imposed particular challenges with regard to the data extraction process due to the intrinsic complexity of tasks (many systems include the composition of pre-processing techniques, rules, hand-crafted features and combinations of algorithms). Additionally, for the majority of task description papers, there is no systematic comparison between systems within a task, and consequently within group or years. Due to the consistent presence of SA along all years, we present an overview of the evolution of system architectures used in SA from 2013 to 2019 (Fig.7) . In this analysis we focus on the best performing architectures. More than one best model in a task signifies best models in subtasks or that the final system was an ensemble of several algorithms. Regression based model encompasses linear, logistic, or Gaussian regression, and Other includes all rule-based or heavily hand-crafted models.",
"cite_spans": [],
"ref_spans": [
{
"start": 622,
"end": 629,
"text": "(Fig.7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "System architectures",
"sec_num": "11.1"
},
{
"text": "We observe a drift in popularity of architectures from ML algorithms (2013-2016) to deep learning (DL) models (2017-2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System architectures",
"sec_num": "11.1"
},
{
"text": "Despite the major adoption of DL models, traditional ML algorithms are consistently in use, both as separate models and as ensembles with DL. This is also true for other types of tasks. In many task description papers from 2018-2019, one can find ML-based systems as top performing participants. SVM-based models are still popular and in some tasks outperforms DL (2018-2, 2019-5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System architectures",
"sec_num": "11.1"
},
{
"text": "In the analysis of system architectures one needs to take into account that best system depends not only on the core algorithm but also on the team expertise and supporting feature sets and language resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System architectures",
"sec_num": "11.1"
},
{
"text": "The output of the SA related tasks provide an account of the evolution of sentiment and emotion representation in this community from 2013 until 2019 (see Appendix A Fig.A.3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 174,
"text": "A Fig.A.3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "At a discrete level, the number of maximum class labels representing sentiment intensity grew from 3 in 2013 to 7 in 2019. At a continuous score level, real-valued scores associated with sentiment was first used in 2015; in 2016 it switched to sentiment intensity; in 2017 it was being used as a way to determine the intensity of an emotion component out of 11 emotion types (rather than a single one, or the generic emotional intensity of a sentence). In terms of targeted subject, the tasks grew more granular over time: paragraph/word (2013), aspect terms (2014), sentence topic (2015), person (2016). Additionally, discourse evolved from simpler opinionated text in the direction of figurative language, for example: handling irony and metaphor in SA (2015), phrases comparison/ranking in terms of sense of humor (2017), irony detection (2018) and contextual emphasis (2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "12 Discussion: What is SemEval evaluating?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "The results of the analysis substantiate the following core claims, which summarise some of the trends identified in this paper:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 There is evidence of significant impact of Se-mEval in the overall NLP community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 SemEval contributed to the construction of a large and diverse set of challenges with regard to semantic representation, supporting resources and evaluation methodologies and metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 SemEval is becoming heavily biased towards solving classification/regression problems. We observe a major interest in tasks where the expected output is a binary or multiclass label or within a continuous real valued score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 Sentiment Analysis tasks accounts for a disproportional attention from the community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 There are two parallel narratives running on SemEval: low entry barrier and state-of-theart defining. SemEval contains a rich corpus of unaddressed and complex NLP tasks, which are eclipsed by the easier low entry barrier tasks. This points to the double function of SemEval which performs a pedagogical task, serving as an entry point for early career researchers to engage within the NLP community and a state-of-the-art forum for pushing the boundaries of natural language interpretation. With the popularity of NLP applications and Deep Learning, the former function is eclipsing the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 There is a significant trend to decrease the variety in the output and evaluation metrics in the recent years. While in the previous years, tasks focused more on novel and exploratory tasks, recent tasks have explored, probably due to emergence of out-of-the-box DL models, this variety significantly decreased. Consequently, participants focus on easier tasks, which in part dissipates the community potential to address long-term challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "\u2022 Despite the recent interest in neural-based architectures, there is clear evidence of the longevity and lasting impact of older NLP methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "11.2"
},
{
"text": "We believe that this paper can serve as a guideline for the selection and organisation of future SemEval tasks. Based on the analyses performed on this paper, these are the main recommendations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations",
"sec_num": "13"
},
{
"text": "\u2022 Prioritise tasks which have a clear argument on semantic and methodological challenges and novelty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations",
"sec_num": "13"
},
{
"text": "\u2022 Differentiate challenges which have a competition/pedagogical purpose from research tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations",
"sec_num": "13"
},
{
"text": "\u2022 Support the systematic capture of task metadata and submission data in a structured manner. This will allow for an efficient comparison between SemEval tasks and deriving insights for future SemEval editions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations",
"sec_num": "13"
},
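A minimal sketch of what one structured task-metadata record could look like (Python dict); the field names are suggestions drawn from the categories analysed in this paper, not an existing SemEval schema.

# Hypothetical structured record for a single SemEval task; all field names and
# values are suggestions only, not an official schema.
task_record = {
    "year": 2019,
    "task_id": "2019-task0",            # placeholder identifier
    "group": "SA",                      # one of SA, SEM, IE, MT, QA, OT
    "languages": ["English"],
    "input_types": ["paragraph"],
    "output_types": ["class label"],
    "evaluation_metrics": ["F1", "precision", "recall"],
    "teams_submitted": 0,               # teams that submitted a system result
    "best_architectures": ["DL", "SVM"],
}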
{
"text": "This paper reported a systematic quantitative analysis of SemEval, which is an important venue for the empirical evaluation of NLP systems. The analysis, which provides a detailed breakdown of 96 tasks in the period between 2012-2019, provided quantitative evidence that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
},
{
"text": "\u2022 SemEval has a significant impact in the overall NLP community",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
},
{
"text": "\u2022 there is a recent drift towards the direction of Deep Learning classification methods which is eclipsing the research function of SemEval",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
},
{
"text": "\u2022 there is longevity and impact of older NLP methods in comparison to Deep Learning methods ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
},
{
"text": "\u20ac \u2022\u201a\u0192 \u201a \" \u2026 \u2020 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192OE\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a \u20ac \u2022\u201a\u0192 \u201a \" \u2026 \u201a \u2020 \u2022 \u2039 \u2022 \u2039 ' \u2021 \u2021 \u2021 \u2021\u20ac ' \" \u2021 \u201a\u0192 \u201a\" \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192OE' \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 ' \u0160 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192OEOE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE \u20ac \u2022\u201a\u0192 \u201a \" \u2026 OE \u2020 \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 - \u20ac \u2022\u201a\u0192 \u201a \" \u2026 - \u2020 \u2022 \u2039 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192-\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 \u201a \" \u2026 \u2022 \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 ' \u20ac \u2022\u201a\u0192 \u201a \" \u2026 ' \u2020 \u0160 \u2039 \u2020 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192-\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u017d \u20ac \u2022\u201a\u0192 \u201a \" \u2026 \u017d \u2020 \u2039 \u2022 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u201a\u2022 \u0192-\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \u017d \u203a \" \u0160 \u2039 OE \u0160 \" \u2021 \u0160\u201a \u2021 \" \u20ac \u02c6 OE \u2022\u2022 \u2020 \u02c6 \u0160 \u2039 \u2021\u20ac \u2021\u20ac \u02c6 \u20ac \u20ac OE \u20ac \u2022 \u2022 OE ' \u0178 \" \u2030 \" \u2021 \u0192 \u017e \u201a \u201a \u017d \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
},
{
"text": "\u201a - \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2020 \" \u2022\u2022 \u2020 \" \u2021 \u2021 \u2021 \u2021\u20ac OE \" \u2021 \u201a\u0192 \u2022\" \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192\u0192 \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u0192 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u201a \u2020 \"\u2030 \u2039 \u2021\u20ac OE \" \u2021 \u201a\u0192 \u2022\" \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192-\u201a \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 ' \u0160 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192OEOE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 OE \u2020 \u02dc\u2039 \u2039 \u2039 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192\u201a- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 - \u2020 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192\u0192' \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 ' \u2020 \" \" ' \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 - \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u017d \u2020 \u2039 \u2022 \" \u2021 \u2021 \u2021 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192\u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 - \u2020 \u2039 \u2022\u2039 \u2022\u2022\u2039 \u2122 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192-\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 ' \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u0192 \u2020 \u2039 \u2022 \u2039 \u2021 \u2021 \u2021 \u2022 \u2021\u0161 \u2039\u2026 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2021 \u2039\u2026 \u2021 \u2021 \u2026\u02c6 \u2022\u201a\u0192 \u2022\u02c6 \u02c6\u201a-\u203a \u2039 \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u017d \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2020 \u2039 \u2122 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192\u2022- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \u0192\" \" \u2022 \u2026 \u02c6 \" \u0160 - \u02c6 \u0192 OE \u20ac \u20ac \u2039 \u20ac OE \u20ac \u2022 \u2022 \u2039 \" \u2022 \u0178 \u203a\u2022 \u2022\u0160 \u02c6 \" \u017e \u2022 \u2022 \u017d \" \u2030 \u2026 \u02c6\" \u0192 OE\" \" \u2020 -\u2022 \u2021 \u0192 \u017d \" \u20ac \u02c6 \" \u2021\u20ac ' \u2030 \u2022 \u2022 \u2039 \" \u2022 \u2026 \" \u2021 \u2021 \u0192 
\u02c6 \u20ac OE \u20ac \u20ac \u2022 \u2022 \u2022 \u2030 \u203a \u017d \u2021 \u2022 \u2021 \u0192 \u20ac \u20ac \u2021\u20ac \u20ac \u017d \" ' \u2030 \u0160 \u2022 \u2022 \u2022 \" \u00a1 \" \u2021 \u2022 \u2022\u2022 \u2021 \" \u2021\u20ac \u02c6 \u20ac \u2021 \" \u2021\u20ac OE \u2022 \u201a \u201a \u2022 \" \u201a \u2022 \u0192 \u20ac \" \u2021 \u20ac \u20ac OE \u20ac \u2022 \u20ac \u2026\" \" \u2022 \" \" \" \u017d ' \u2020 \u02c6 \u2030 \" \u201a \u0192 \u017e \" \u2022 --\" \u2021 \u2030 - \u0160\" \u2039 \" \u2021\u20ac \" \u2021\u20ac OE \u2022 \u0161 \u0161 \u00a1 \u203a\u2030 \u017d \u2026 \u2021 ' \u0161 \u0160\u0161 \u0192 \u017e \u017e \" \u2021 \u20ac \u2030oe \u0161 - \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u201a \u2020 \u2022 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192OE\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a\u0192 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 \u2039 \u2122 \u2039 oe\u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022\u201a\u0192OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a \u20ac \u2022\u201a\u0192 OE \" \u2026 \u2020 \u2021 \u2021 \u2021 \u2021\u20ac \u201aOE \u2021 \u201a\u0192 OE\" \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192 \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a\u201a \u20acoe\u2039 \u2039 \u2122 \u2039 \u2026 \u2022 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a\u2022 \u20ac \u2022\u201a\u0192 OE \" \u2026 \u2022 \u2020 \u2039 \u2022 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201aOE \u20ac \u2022\u201a\u0192 OE \" \u2026 OE \u2020 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192OE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a- \u20ac \u201a\u0192 OE \" \u2026 -\u2022 \u201a \u2039 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a\u2022 \u20ac \u2022\u201a\u0192 OE \" \u2026 \u2022 \u2020 \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a' \u20ac \u2022\u201a\u0192 OE \" \u2026 ' \u2020 \" \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192' \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u201a\u017d \u20ac \u201a\u0192 OE \" \u2026 \u017d \u2020 \u2039 \u2022 \u2039 \u2021 \u2021 \u2021 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192\u017d \u2021 
\u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \u2022 \u0160 \u0160 \" \u2021 \u0160 \u2021 \" \u20ac \u02c6 \u20ac OE \u20ac \u2022\" \u2022 \u0192\" \" \" \u2021 \u201a \u2021 \u0192 \u017d \" ' \u2030 \u20ac \u0160 \u201a \u2022 -' \u2021 - \u2021 \u0192 \u017d \" \u20ac \u02c6 ' \u2030 \u20ac OE \u0160 \u201a \u2022 - \" \u2021\u20ac \u20ac ' \u2039 \u02c6 \u2021 \" \u2021 \u0192 \u017d \" ' \u2030 \u20ac \u0160 \u201a \u2022 -\" \u2030 \" \u2021 \u2122\" \" \u201a\" \u02dc\" \u2122\" \u02dc\" \u20ac \u02c6 OE \u20ac \" \u02c6 \u201a \u2022 \u2022 \u2022- -\u2030 \u203a\u2026 \" \u017d \u2022 \u2022 \u2021 \" \u20ac \u2021 OE \u2022 \u201a \u201a \u2022 \u201a-\u2122\" \u017d \u201a \u2022 \u2021 \" \" \u201a \" \u2022 \"-\u2030 \u017d \" \u0161 \u2039 \u20ac OE \u20ac \u201a - \u2022 --\u2026 \u017d \" - \u2039 \u20ac \u2022\u00a1\u20ac \u017d \u20ac \" \u2021 \u2022 \u201a \u0161 \u2022 \u0161- \u017d \u0161 \u2039 \u02c6\" \u20ac \u02c6\" \u20ac \u02c6 \u20ac \u20ac OE \u20ac \u201a \u0160 \u2022 \u0160-\u2030 \u2039' \u017d \u2039 ' \u2021 \u017d \u0160 \u2021 \" \u02c6 \u20ac OE \u20ac \u201a \u2022 - \" \u2021 \u2022 \u201a \u2021 \" \u20ac \u20ac \u017d \" \u20ac \u02c6 \u2022 ' \u20ac OE \u20ac \"\u017d \u2026 \u2022 \u201a \u2022 - \" \u2021 OE \" \u017d \" \u2022 \u2021 \" \u017d \" \u2021 \u201a \u201a- \u20ac \u2022\u201a\u0192 OE \" \u2026 - \u2020 \"\u2030 \u2039 \u2022 \u2021\u20ac \u201aOE \u2021 \u201a\u0192 OE\" \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192\u0192- \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u0192 \u20ac \u2022\u201a\u0192 OE \" \u2026 \u0192 \u2020 \u2022 \" \u2039 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 OE\u2022\u201a\u0192 \u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 -\" \u2026 \u2020 \u2039 \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u201a \u20ac \u2022\u201a\u0192 -\" \u2026 \u201a \u2020 \" \u2039 \" \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u2022 \u20ac \u2022\u201a\u0192 -\" \u2026 \u2022 \u2020 \u2030 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192OE' \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022OE \u20ac \u2022\u201a\u0192 -\" \u2026 OE \u2020 \" \u2020 \u2039 \u2022\u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a \u2022\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022- \u20ac \u2022\u201a\u0192 -\" \u2026 - \u2020 \u2022 \" \u2022 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a \u2022OE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u2022 \u20ac \u2022\u201a\u0192 -\" \u2026 \u2022 \u2020 \" \u2022 \u2021\u20ac 
\u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a \u2022\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022' \u20ac \u201a\u0192 -\" \" \u2026 ' \u2020 \u2022 \u2039 \" \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a OE' \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u017d \u20ac \u2022\u201a\u0192 -\" \u2026 \u017d \u2020 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022- \u20ac \u2022\u201a\u0192 -\" \u2026 - \u2020 \u2122 \u2122 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192'' \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE\u0192 \u20ac \u2022\u201a\u0192 -\" \u2026 \u0192 \u2020 \"\u2030 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192'\u017d \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE \u20ac \u2022\u201a\u0192 -\" \u2026 \u2020 \u02dc \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6 \u2021\u017e \u2039 \u2021 \u2039 \u02c6 \u201a\u0192 -\u02c6 \u2026 \u02c6 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 225 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \u201a \u2022 -\" \u00a1 \" \u2021 \u2022 \u2022 \u2021 \" \u017e\u20ac \u02c6\" \u20ac \u2021\u20ac \u017e\u20ac \u02c6\" \u20ac \u02c6 \u20ac \u2021\u20ac \u017e\u20ac OE \" \u201a \u2022 -\u0192\" \" \" \u203a\u2022 \u2026 \u02c6 \" \u2021 \u2022 \u2022 \u0161\u201a \u2021 \u2022 \" \" \u20ac OE \u20ac \u20ac \u20ac \u2039 \u0160 \u201a \u2022 \u2022 \u2022-\" \u2021 \u2030 \u2022\u2022 \u201a \u2039 \u2021\u20ac \u02c6\" \u20ac \u2021 OE\u00a4\u2022\"\u20ac OE \" \u201a \u201a \u2022 \u201a-\" \u2030'\" \u2026 \u2021\u203a \u2021\u203a\u00a1\" \u2022 \u2022\u201a \u0192 \u20ac \u017e\u20ac \u20ac OE \u201a \u201a - \u2022 -- \u2021 \u017d \" \u2022\" \u2022 \u2021 \u20ac OE \u20ac \" \u201a \u0161 \u2022 \u0161- \u2026 \u2021 ' \u2022- \u0192 \u017e \u017e \" \u2021 \u20ac \u2039 \u20ac \u2030oe \" \" \u2022 - \" \u2021\u20ac \u0192 \" \u2030 \u203a \" \u017d \" \u2022\u0161 \u201a \u2021 \u0192 \u017d \" ' \u2030 \u20ac \u2022 \" \u2022 -\u2039 \u02c6 \" \u2021 \u2022\u0160 \" \u2021 \u0192 \u017d \" \u017e \u0160 \" \u2022 -\u2030 \" \u2021 \u2122\" \" \u201a \u2021 \u0192 \u02dc\" \u20ac \u02dc\" \u20ac \u0192\"' \u20ac \" \u02c6 \u0161 \" \u2022 \u2022 \u2022- \" \u2021 \u201a \u2022 \" \u2021 \" \u20ac \u02c6 \u02c6 \u2021 \u02c6\" \u20ac \u02c6 \u0192\" \u20ac \u2022\" \u02c6 \u2022\u203a \u02c6 \u2026 \u017d \u20ac \u20ac OE \u20ac \u0192 \u017d ' \u2026 \u2022 \" \u201a \u2022 \u201a-\" \u203a\u00a1 \" \u2021 \u201a \u2022\"\u201a \u2021 \" \u017e\u20ac \u20ac \u2021\u20ac \u017e\u20ac \u02c6 \u20ac \u2021\u20ac \" \u2021\u20ac OE \u20ac \u20ac OE \u20ac \" \u02c6 \u20ac \u2026\" \u20ac \u2030 \u20ac \u0160 OE\u201a \u20ac \u2022\u201a\u0192 -\" \u2026 \u201a \u2020 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192\u017d\u201a\u02c6 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 
OE\u2022 \u20ac \u2022\u201a\u0192 -\" \u2026 \u2022 \u2020 \u2022 \u2022 \u2039 \u2021 \u2021 \u2021 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192OE- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OEOE \u20ac \u2022\u201a\u0192 -\" \u2026 OE \u2020 \" \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE- \u20ac \u2022\u201a\u0192 -\" \u2026 - \u2020 \u2039 \u2022 \u2039 \u2022\u0160 \u2026 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a\u0192-\u2022 \u2021 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE\u2022 \u20ac \u2022\u201a\u0192 -\" \u2026 ' \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a - \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE' \u20ac \u201a\u0192 -\" \u2026 \u017d \u2020 \u2039 \u2022 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 -\u2022\u201a -\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE\u017d \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017d \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 OE- \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u201a \u2020 \u2122 \u2039 \u2039 \u0160 \" \u2039 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017d\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -\u0192 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 \u2022 \u2030 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017d\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 - \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 OE \u2020 \"\u2030 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -\u201a \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 - \u2020 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u0192\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 226 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \" \u2022 \u20ac \u2020\" \" \" \u2022 \"-\u2026 \u201a \u0160 \u2021 \" \u20ac \u02c6 OE \u0161 \" - \u2022 --\u2026 \u2039 \u2021 \" \u02c6 ' \u201a\u2022 \u0160 \u2021 \" \u017d \" \u2022 ' \u20ac \" \u02c6 \u201a \" \u0161 \u2022 \u0161-\u0192 \u2020 ' \u201a\u201a \u0161 \u0192 \" \u0160 \u2022 \u0160-\u2030 \u2026 \u2021 ' \u201a\" \u0192 \u2022\u00a1\u20ac OE \u2030 \" \u2022 -\u2026 \u0192 \u2022 \u0192 \u201a- \u2021 \u0192 \u2021 OE \" \" \u2022 -\u2030 \u2022 \u2039 \u201a\u0161 \u2022 \u20ac \u02c6 OE \" \u2022 -\u2030 \u017d \u201a\u0160 -\u0160 \u2039 \u20ac OE \u20ac \u2022 \" \u2022 
- \u2021 \u017d \" \u00a5 \u017d \u203a \u00a6 \" \u201a\u2022 \u2021 \u20ac \u2022\u00a1 \u20ac \u2026\"\u2022 \" \u00a7\" \u20ac \u20ac OE \u20ac \u2030\" \" \u017d OE\u00a3\u0192\u20ac \u20ac \u2039 \u20ac OE \u20ac \u2026\" \u201a \" \u2022 \u2022 \u2022- \u2021 \" \u2021 \u20ac \u02c6 \u20ac \u2021 \" \u2021\u20ac \u201a - \u2022 - \" \u2021 \" \u2022\u2022 \u2021 \u0192 \u017d \" ' \u2030 \u20ac \u20ac \" \u02c6 \u20ac \" \u2022 - \u2022 -\u0192\" \u203a \" \u2030 \u203a \" \u2022 \u2021 \" \u201a\" \u2021 \u0192 \u017d \" ' \u2030 \u20ac \u2030 \u20ac \u20ac \u2039 \u20ac \u2022 \u20ac - -\u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 \u2022 \"\u2030 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u0192\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -OE \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 ' \u2020 \u2022 \u2039 \u2122 \u2021 \u2021 \u2021 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u0192OE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -- \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u017d \u2020 \u2022 ' \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u2022\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -\u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 - \u2020 \u2022 \u2039 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u2022' \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -' \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u0192 \u2020 \u2022 \u2022 \u0178 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017dOE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -\u017d \u20ac \u201a\u0192 \u2022 \" \u2026 \u2020 \u2039 \u2122 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017d- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 -- \u20ac \u2022\u201a\u0192 ' \" \u2026 \u201a \u2020 \" \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192-\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u0192 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u2022\u017d \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022 \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 OE \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u2022- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u201a \u20ac \u2022\u201a\u0192 ' \" \u2026 \u2020 \" \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u2022 \u20ac \u2022\u201a\u0192 ' \" \u2026 \u201a \u2020 \u2022 \u2039 \u2022 \u2021\u20ac 
\u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u0192\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 227 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 ' \u00a5OE \u00a6 - \u2022 -\u2030 \" \u2021 \u2122\" \" \"\u2022 \" \u2021 \u0192 \u02dc\" \u20ac \u02dc\" \u20ac \u0192\"' \u20ac \" \u02c6 -\u2022 \u2022 \u2022- \" \u2021 \"\u201a - \u2021 \" \u20ac \u02c6 \u02c6 \u2021 \u02c6\" \u20ac \u02c6 \u0192\" \u20ac \u2022\" \u02c6 \u2022\u203a \u02c6 \u2026 \u017d \u20ac \u20ac \u0192 \u017d ' \u2026 \u20ac \" \u02c6 \u2022\u0161 -\u201a \u2022 \u201a-OE \u203a\u2022 \" \u2021 OE \u0192 \u02c6 \u0178 \"\" \u2022\u0161 \u2021 \" \u017d \" \u2021 -\" \u2022 \"-\u00a8\u00a9 \u2022 - \u00a9\" \"- \u201a \u2021 \" \u02c6 \u20ac \" \u2021\u20ac \u0161 -- \u2022 --\u2026 \u2039 '\" \"\u0161 \u0192 \u20ac \u20ac \u02c6 \u20ac \u2021 \u017d \u20ac \u20ac OE \u20ac -\u0161 \u2022 \u0161- \u2020\" \" \u017d -\u2026 \" \" \u017d \u2021 \" \" \" \"\u0160 - \u2022 \u2039 \u20ac \u02c6 \" \u2021 \u0161 -\u0160 \u2022 \u0160-\"\u02c6 \u0192 \u2020 ' \u2022 - \u201a \u0192 \u20ac \u20ac \u20ac \" \u00a7\" \u201a - \u2022 - \u2022 \u2021 \u2020 '\"\u02c6 - - \u2039 \u20ac \u02c6 \u20ac \u20ac \u02c6 \u20ac OE \u20ac \" - \u2022 - \u203a\u2022 \u2026 \u017d \" \u0178 \" \" - \u2022 \u0192 \u2022\u00a1\u20ac \u017e \u20ac ' \" - \u2022 -\u2030 \u017d - \u2039 \u20ac OE \u20ac \u0161 \u2022 -\" -\u2022 \u0160\u0161 \u2021 \" \u2021\u20ac \u017d \" \u20ac \u02c6 ' \u2030 \u20ac \u00a2 \u20ac \u20ac \" \u02c6 -\u201a \u2022OE \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 \u2022 \u2020 \u2022 \u2030 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u017d\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022- \u20ac \u2022\u201a\u0192 \u2022 \" \u2026 OE \u2020 \"\u2030 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u2022\u2022 \u0192\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u2022 \u20ac \u2022\u201a\u0192 ' \" \u2026 - \u2020 \u02dc \u2022oe\u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u017d-\u02c6 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022' \u20ac \u2022\u201a\u0192 ' \" \u2026 \u2022 \u2020 \u00a1 \u2039 \u2020 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u0192OE \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022\u017d \u20ac \u2022\u201a\u0192 ' \" \u2026 ' \u2020 \u2022 \u2122 \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 \u2022- \u20ac \u2022\u201a\u0192 ' \" \u2026 \u017d \u2020 ' \u2039 \u2020 \u2022 \u2039 \u2039 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192\u0192\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 '\u0192 \u20ac \u2022\u201a\u0192 ' \" \u2026 - \u2020 \u0160 \u2039 \u2022 ' 
\u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192-\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 ' \u20ac \u201a\u0192 ' \" \u2026 \u0192 \u2020 \u2122 \u2022 \u2039 \u00a2 \u2039 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192- \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 '\u201a \u20ac \u2022\u0178 \u2039 \u2022 \u2039 \u2022 \u2021 \u2021 \u2021 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192-\u201a \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 '\u2022 \u20ac \u2022\u201a\u0192 ' \" \u2026 \u201a \u2020 \" \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 '\u2022\u201a\u0192-\u2022 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 'OE \u20ac \u2022\u201a\u0192 \u017d \" \u2026 \u2020 \"\u2030 \u2022 \u2021\u20ac \u2020\u02c6\u02c6\u2030\u2030\u2030 \u2021 \u2030 \u0160 \u2021 \u2039 \u02c6 \u02c6 \u017d\u2022 \u0192\u0192 \u2021 \u201a\u017d \u2022 \u2021 \u201a\u0192\u201a\u0192 \u2021 228 \u2022 \u017d \u017d \" \u017e \" \u2021 \" \u2021 \u017d \" \u2030 \u0178 ' \" \u00a1 \u0161 \u2022 -\u0192\" \" \u00a7 ' -\u201a \u2021 \" \u02c6 \" \u2021\u20ac \u20ac OE \u20ac \u20ac \u2022\u0160 \u0161 \u2022 -\u2039 \u2021 \u2026 -\" \u2022 \u2021 \" \u02c6 OE \u2022 \u0161 \u2022 \u2022 \u2022-\u2030 \u2039 \u0192\" \u2021 \u2026 \" -- \u2021 \u2022 \u20ac \u2021 \" \u2021\u20ac OE \u2022 \u0161 \u201a \u2022 \u201a-\u2030 \" \u017d ' \u00a9 \u2021 \" \u02c6 \" \" \u2026 \u017d \u017d \u2021 -\u0161 \u02dc\" \u2122",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "14"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Johan Bos, Mona Diab, Suresh Manand- har, Yuval Marton, and Deniz Yuret, editors. 2012. *SEM 2012: The First Joint Conference on Lexi- cal and Computational Semantics -Volume 1: Pro- ceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Interna- tional Workshop on Semantic Evaluation (SemEval 2012). Association for Computational Linguistics, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Proceedings of The 12th International Workshop on Semantic Evaluation. Association for Computational Linguistics, New Orleans",
"authors": [
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1"
]
},
"num": null,
"urls": [],
"raw_text": "Marianna Apidianaki, Saif M. Mohammad, Jonathan May, Ekaterina Shutova, Steven Bethard, and Ma- rine Carpuat, editors. 2018. Proceedings of The 12th International Workshop on Semantic Evaluation. As- sociation for Computational Linguistics, New Or- leans, Louisiana.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the 2019 conference on machine translation (wmt19)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "1--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine trans- lation (wmt19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2"
]
},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Marine Carpuat, Marianna Apidianaki, Saif M. Mohammad, Daniel Cer, and David Jurgens, editors. 2017. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics, Vancou- ver, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Preslav Nakov, and Torsten Zesch",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1"
]
},
"num": null,
"urls": [],
"raw_text": "Steven Bethard, Marine Carpuat, Daniel Cer, David Jurgens, Preslav Nakov, and Torsten Zesch, editors. 2016. Proceedings of the 10th International Work- shop on Semantic Evaluation (SemEval-2016). As- sociation for Computational Linguistics, San Diego, California.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overcoming barriers to NLP for clinical text: the role of shared tasks and the need for additional creative solutions",
"authors": [
{
"first": "Wendy",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Prakash",
"middle": [
"M"
],
"last": "Nadkarni",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Leonard",
"middle": [
"W"
],
"last": "D'Avolio",
"suffix": ""
},
{
"first": "Guergana",
"middle": [
"K"
],
"last": "Savova",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of the American Medical Informatics Association",
"volume": "18",
"issue": "5",
"pages": "540--543",
"other_ids": {
"DOI": [
"10.1136/amiajnl-2011-000465"
]
},
"num": null,
"urls": [],
"raw_text": "Wendy W Chapman, Prakash M Nadkarni, Lynette Hirschman, Leonard W D'Avolio, Guergana K Savova, and Ozlem Uzuner. 2011. Overcoming bar- riers to NLP for clinical text: the role of shared tasks and the need for additional creative solutions. Jour- nal of the American Medical Informatics Associa- tion, 18(5):540-543.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Advancing the state of the art in clinical natural language processing through shared tasks. Yearbook of medical informatics",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Filannino",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "27",
"issue": "",
"pages": "184--192",
"other_ids": {
"DOI": [
"10.1055/s-0038-1667079"
]
},
"num": null,
"urls": [],
"raw_text": "Michele Filannino and \u00d6zlem Uzuner. 2018. Advanc- ing the state of the art in clinical natural language processing through shared tasks. Yearbook of medi- cal informatics, 27(1):184-192.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Publish or perish",
"authors": [
{
"first": "A",
"middle": [
"W"
],
"last": "Harzing",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.W. Harzing. 2007. Publish or perish. Available from https://harzing.com/resources/publish-or-perish.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Soft Cardinality in Semantic Text Processing: Experience of the SemEval International Competitions. Polibits",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"A"
],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "63--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, Fabio A. Gonzalez, and Alexander Gelbukh. 2015. Soft Cardinality in Semantic Text Processing: Experience of the SemEval Interna- tional Competitions. Polibits, pages 63 -72.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Proceedings of the Seventh International Workshop on Semantic Evaluation",
"authors": [],
"year": 2013,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar and Deniz Yuret, editors. 2013. Sec- ond Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Eval- uation (SemEval 2013). Association for Computa- tional Linguistics, Atlanta, Georgia, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Proceedings of the 13th International Workshop on Semantic Evaluation. Association for Computational Linguistics",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad, editors. 2019. Proceedings of the 13th International Workshop on Semantic Evaluation. As- sociation for Computational Linguistics, Minneapo- lis, Minnesota, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Developing a successful semeval task in sentiment analysis of twitter and other social media texts",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Resources and Evaluation",
"volume": "50",
"issue": "1",
"pages": "35--65",
"other_ids": {
"DOI": [
"10.1007/s10579-015-9328-1"
]
},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Sara Rosenthal, Svetlana Kiritchenko, Saif M. Mohammad, Zornitsa Kozareva, Alan Ritter, Veselin Stoyanov, and Xiaodan Zhu. 2016. Develop- ing a successful semeval task in sentiment analysis of twitter and other social media texts. Language Resources and Evaluation, 50(1):35-65.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval",
"authors": [],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/S14-2"
]
},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov and Torsten Zesch, editors. 2014. Pro- ceedings of the 8th International Workshop on Se- mantic Evaluation (SemEval 2014). Association for Computational Linguistics, Dublin, Ireland.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proceedings of the 9th International Workshop on Semantic Evaluation (Se-mEval 2015). Association for Computational Linguistics",
"authors": [],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/S15-2"
]
},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Torsten Zesch, Daniel Cer, and David Jurgens, editors. 2015. Proceedings of the 9th In- ternational Workshop on Semantic Evaluation (Se- mEval 2015). Association for Computational Lin- guistics, Denver, Colorado.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rob van der Goot, Hessel Haagsma, Barbara Plank, and Martijn Wieling",
"authors": [
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
},
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "van der Goot",
"suffix": ""
},
{
"first": "Hessel",
"middle": [],
"last": "Haagsma",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Martijn",
"middle": [],
"last": "Wieling",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "897--904",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00304"
]
},
"num": null,
"urls": [],
"raw_text": "Malvina Nissim, Lasha Abzianidze, Kilian Evang, Rob van der Goot, Hessel Haagsma, Barbara Plank, and Martijn Wieling. 2017. Last words: Sharing is car- ing: The future of shared tasks. Computational Lin- guistics, 43(4):897-904.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ethical considerations in NLP shared tasks",
"authors": [
{
"first": "Carla",
"middle": [],
"last": "Parra Escart\u00edn",
"suffix": ""
},
{
"first": "Wessel",
"middle": [],
"last": "Reijers",
"suffix": ""
},
{
"first": "Teresa",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "Joss",
"middle": [],
"last": "Moorkens",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Chao-Hong",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "66--73",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1608"
]
},
"num": null,
"urls": [],
"raw_text": "Carla Parra Escart\u00edn, Wessel Reijers, Teresa Lynn, Joss Moorkens, Andy Way, and Chao-Hong Liu. 2017. Ethical considerations in NLP shared tasks. In Pro- ceedings of the First ACL Workshop on Ethics in Nat- ural Language Processing, pages 66-73, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A replication study of the top performing systems in semeval twitter sentiment analysis",
"authors": [
{
"first": "Efstratios",
"middle": [],
"last": "Sygkounas",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Rizzo",
"suffix": ""
},
{
"first": "Rapha\u00ebl",
"middle": [],
"last": "Troncy",
"suffix": ""
}
],
"year": 2016,
"venue": "The Semantic Web -ISWC 2016",
"volume": "",
"issue": "",
"pages": "204--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstratios Sygkounas, Giuseppe Rizzo, and Rapha\u00ebl Troncy. 2016. A replication study of the top per- forming systems in semeval twitter sentiment anal- ysis. In The Semantic Web -ISWC 2016, pages 204- 219, Cham. Springer International Publishing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": ": a) # of tasks ; b) # of teams participating in SemEval 2012-2019",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Cumulative number of task citations a) except for citations in SemEval proceedings; b) in SemEval proceedings",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Number of task citations a) published in given year, except for citations in SemEval proceedings; b) from given year in SemEval proceedings",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Languages used in SemEval tasks from 2012 to 2019 Output Types used in SemEval tasks from 2012 to 2019 Figure 6: Evaluation Metrics used in SemEval tasks from 2012 to 2019Figure 7: Models used in SA tasks from 2012 to 2019 at SemEval",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Input Types used in SemEval tasks from 2012 to 2019",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "Heatmap on the Evaluation Metrics and Output Types Figure A.3: Timeline of Input Types (upper row) and Output Types (lower row) in Sentiment Analysis tasks at SemEval 2013",
"type_str": "figure",
"uris": null
}
}
}
}