{
"paper_id": "S13-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:41:52.909801Z"
},
"title": "BUT-TYPED: Using domain knowledge for computing typed similarity",
"authors": [
{
"first": "Lubomir",
"middle": [],
"last": "Otrusina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "[email protected]"
},
{
"first": "Pavel",
"middle": [],
"last": "Smrz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper deals with knowledge-based text processing which aims at an intuitive notion of textual similarity. Entities and relations relevant for a particular domain are identified and disambiguated by means of semi-supervised machine learning techniques and resulting annotations are applied for computing typedsimilarity of individual texts.",
"pdf_parse": {
"paper_id": "S13-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper deals with knowledge-based text processing which aims at an intuitive notion of textual similarity. Entities and relations relevant for a particular domain are identified and disambiguated by means of semi-supervised machine learning techniques and resulting annotations are applied for computing typedsimilarity of individual texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The work described in this paper particularly shows effects of the mentioned processes in the context of the *SEM 2013 pilot task on typed-similarity, a part of the Semantic Textual Similarity shared task. The goal is to evaluate the degree of semantic similarity between semi-structured records. As the evaluation dataset has been taken from Europeana -a collection of records on European cultural heritage objects -we focus on computing a semantic distance on field author which has the highest potential to benefit from the domain knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Specific features that are employed in our system BUT-TYPED are briefly introduced together with a discussion on their efficient acquisition. Support Vector Regression is then used to combine the features and to provide a final similarity score. The system ranked third on the attribute author among 15 submitted runs in the typed-similarity task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The goal of the pilot typed-similarity task lied in measuring a degree of semantic similarity between semi-structured records. The data came from the Europeana digital library 1 collecting millions of records on paintings, books, films, and other museum and archival objects that have been digitized throughout Europe. More than 2,000 cultural and scientific institutions across Europe have contributed to Europeana. There are many metadata fields attached to each item in the library, but only fields title, subject, description, creator, date and source were used in the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Having this collection, it is natural to expect that domain knowledge on relevant cultural heritage entities and their inter-relations will help to measure semantic closeness between particular items. When focusing on similarities in a particular field (a semantic type) that clearly covers a domain-specific aspect (such as field author/creator in our case), the significance of the domain knowledge should be the highest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuitively, the semantic similarity among authors of two artworks corresponds to strengths of links that can be identified among the two (groups of) authors. As the gold standard for the task resulted from a Mechanical Turk experiment (Paolacci et al., 2010) , it could be expected that close fields correspond to authors that are well known to represent the same style, worked in the same time or the same art branch (e. g., Gabri\u00ebl Metsu and Johannes Vermeer), come from the same region (often guessed from the names), dealt with related topics (not necessarily in the artwork described by the record in question), etc. In addition to necessary evaluation of the intersection and the union of two author fields (leading naturally to the Jaccard similarity coeffi-cient on normalized name records -see below), it is therefore crucial to integrate means measuring the above-mentioned semantic links between identified authors.",
"cite_spans": [
{
"start": 236,
"end": 259,
"text": "(Paolacci et al., 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
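As a minimal illustration of the Jaccard similarity coefficient on normalized name records mentioned above (a sketch under assumed normalization rules, not the exact BUT-TYPED implementation):

def normalize_name(name):
    # Lowercase, reorder "Last, First" to "first last" and drop punctuation.
    name = name.lower().strip()
    if "," in name:
        last, _, first = name.partition(",")
        name = first.strip() + " " + last.strip()
    return " ".join("".join(c for c in name if c.isalnum() or c.isspace()).split())

def jaccard(names_a, names_b):
    # Jaccard coefficient of two sets of normalized author names.
    a = {normalize_name(n) for n in names_a}
    b = {normalize_name(n) for n in names_b}
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Differently written forms of the same author still match after normalization.
print(jaccard(["Metsu, Gabriel"], ["Gabriel Metsu"]))  # 1.0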
{
"text": "Unfortunately, there is a lot of noise in the data used in the task. Since Europeana does not precisely define meaning and purpose of each particular field in the database, many mistakes come directly from the unmanaged importing process realized by participating institutions. Fields often mix content of various semantic nature and, occasionally, they are completely misinterpreted (e. g., field creator stands for the author, but, in many cases, it contains only the institution the data comes from). Moreover, the data in records is rather sparse -many fields are left empty even though the information to be filled in is included in original museum records (e. g., the author of an artwork is known but not entered).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The low quality of underlying data can be also responsible for results reported in related studies. For example, Aletras et al. (2012) evaluate semantic similarity between semi-structured items from Europeana. They use several measures including a simple normalized textual overlap, the extended Lesk measure, the cosine similarity, a Wikipedia-based model and the LDA (Latent Dirichlet Allocation). The study, restricted to fields title, subject and description, shows that the best score is obtained by the normalized overlap applied only to the title field. Any other combination of the fields decreased the performance. Similarly, sophisticated methods did not bring any improvement.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "Aletras et al. (2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The particular gold standard (training/test data) used in the typed-similarity task is also problematic. For example, it provides estimates of location-based similarity even though it makes no sense for particular two records -no field mentions a location and it cannot be inferred from other parts). A throughout analysis of the task data showed that creator is the only field we could reasonably use in our experiments (although many issues discussed in previous paragraphs apply for the field as well). That is why we focus on similarities between author fields in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While a plenty of measures for computing textual similarity have been proposed (Lin, 1998; Landauer et al., 1998; Sahlgren, 2005; Gabrilovich and Markovitch, 2007) and there is an active research in the fields of Textual Entailment (Negri et al., 2012) , Paraphrase Identification (Lintean and Rus, 2010) and, recently, the Semantic Textual Similarity (Agirre et al., 2012) , the semi-structured record similarity is a relatively new area of research. Even though we focus on a particular domain-specific field in this study, our work builds on previous results (Croce et al., 2012; Annesi et al., 2012) to pre-compute semantic closeness of authors based on available biographies and other related texts.",
"cite_spans": [
{
"start": 79,
"end": 90,
"text": "(Lin, 1998;",
"ref_id": "BIBREF6"
},
{
"start": 91,
"end": 113,
"text": "Landauer et al., 1998;",
"ref_id": "BIBREF5"
},
{
"start": 114,
"end": 129,
"text": "Sahlgren, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 130,
"end": 163,
"text": "Gabrilovich and Markovitch, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 232,
"end": 252,
"text": "(Negri et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 281,
"end": 304,
"text": "(Lintean and Rus, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 352,
"end": 373,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 562,
"end": 582,
"text": "(Croce et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 583,
"end": 603,
"text": "Annesi et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: The next section introduces the key domain-knowledge processing step of our system which aims at recognizing and disambiguating entities relevant for the cultural heritage domain. The realized system and its results are described in Section 3. Finally, Section 4 briefly summarizes the achievements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A fundamental step in processing text in particular fields lies in identifying named entities relevant for similarity measuring. There is a need for a named entity recognition tool (NER) which identifies names and classifies referred entities into predefined categories. We take advantage of such a tool developed by our team within the DECIPHER project 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Recognition and Disambiguation",
"sec_num": "2"
},
{
"text": "The DECIPHER NER is able to recognize artists relevant for the cultural heritage domain and, for most of them, to identify the branch of the arts they were primarily focused on (such as painter, sculptors, etc.). It also recognizes names of artworks, genres, art periods and movements and geographical features. In total, there are 1,880,985 recognizable entities from the art domain and more than 3,000,000 place names. Cultural-heritage entities come from various sources; the most productive ones are given in Table 1 . The list of place names is populated from the Geo-Names database 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Entity Recognition and Disambiguation",
"sec_num": "2"
},
{
"text": "The tool takes lists of entities and constructs a finite state automaton to scan and annotate input texts. It is extremely fast (50,000 words per second) and has a relatively small memory footprint (less than 90 MB for all the data).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Recognition and Disambiguation",
"sec_num": "2"
},
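As an illustration of this kind of dictionary scanning (a simplified longest-match sketch over a token trie; the DECIPHER automaton itself is not published here), the matching step could look as follows:

def build_trie(entities):
    # entities: mapping from surface form to entity metadata.
    root = {}
    for surface, meta in entities.items():
        node = root
        for token in surface.split():
            node = node.setdefault(token, {})
        node["$"] = meta  # end-of-entity marker
    return root

def annotate(text, trie):
    # Greedy longest-match scan; returns (start, end, metadata) triples over token indices.
    tokens = text.split()
    spans, i = [], 0
    while i < len(tokens):
        node, match = trie, None
        for j in range(i, len(tokens)):
            if tokens[j] not in node:
                break
            node = node[tokens[j]]
            if "$" in node:
                match = (i, j + 1, node["$"])
        if match:
            spans.append(match)
            i = match[1]
        else:
            i += 1
    return spans

entities = {"Gabriel Metsu": {"type": "painter"}, "Paris": {"type": "city"}}
print(annotate("Gabriel Metsu worked outside Paris", build_trie(entities)))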
{
"text": "Additional information attached to entities is Source # of entities Freebase 4 1,288,192 Getty ULAN 5 528,921 VADS 6 31,587 Arthermitage 7 4,259 Artcyclopedia 8 3,966 The tool is also able to disambiguate entities based on a textual context in which they appeared. Semantic types and simple rules preferring longer matches provide a primary means for this. For example, a text containing Bobigny -Pablo Picasso, refers probably to a station of the Paris Metro and does not necessarily deal with the famous Spanish artist. A higher level of disambiguation takes form of classification engines constructed for every ambiguous name from Wikipedia. A set of most specific terms characterizing each particular entity with a shared name is stored together with an entity identifier and used for disambiguation during the text processing phase. Disambiguation of geographical names is performed in a similar manner.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 169,
"text": "entities Freebase 4 1,288,192 Getty ULAN 5 528,921 VADS 6 31,587 Arthermitage 7 4,259 Artcyclopedia 8",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Entity Recognition and Disambiguation",
"sec_num": "2"
},
{
"text": "To compute semantic similarity of two non-empty author fields, normalized textual content is compared by an exact match first. As there is no unified form defined for author names entered to the field, the next step applies the NER tool discussed in the previous section to the field text and tries to identify all mentioned entities. Table 2 shows examples of texts from author fields and their respective annota-tions (in the typewriter font).",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
{
"text": "Dates and places of birth and death as well as few specific keywords are put together and used in the following processing separately. To correctly annotate expressions that most probably refer to names of people not covered by the DECIPHER NER tool, we employ the Stanford NER 9 that is trained to identify names based on typical textual contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
{
"text": "The final similarity score for a pair of author fields is computed by means of the SVR combining specific features characterizing various aspects of the similarity. Simple Jaccard coefficient on recognized person names, normalized word overlap of the remaining text and its edit distance (to deal with typos) are used as basic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
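The remaining basic features can be sketched as follows (an illustrative reading; the exact normalization used by the system is not specified in the paper):

import difflib

def word_overlap(a, b):
    # Normalized word overlap: intersection size over the size of the smaller set.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / min(len(wa), len(wb))

def edit_similarity(a, b):
    # Character-level similarity in [0, 1]; tolerant to typos.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(word_overlap("Benjamin West", "West Benjamin"))     # 1.0
print(edit_similarity("Gabriel Metsu", "Gabriel Mestu"))  # close to 1.0 despite the typo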
{
"text": "Places of births and deaths, author's nationality (e. g., Irish painter) and places of work (active in Spain and France) provide data to estimate locationbased similarity of authors. Coordinates of each location are used to compute an average location for the author field. The distance between the average coordinates is then applied as a feature. Since types of locations (city, state, etc.) are also available, the number of unique location types for each item and the overlap between corresponding sets are also employed as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
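A sketch of this location feature (averaging coordinates and taking the great-circle distance between the two averages via the haversine formula; the paper does not give the exact computation):

import math

def mean_location(coords):
    # coords: list of (latitude, longitude) pairs in degrees.
    lat = sum(c[0] for c in coords) / len(coords)
    lon = sum(c[1] for c in coords) / len(coords)
    return lat, lon

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

item_a = [(48.86, 2.35), (50.85, 4.35)]  # e.g., Paris and Brussels for one author field
item_b = [(40.42, -3.70)]                # e.g., Madrid for the other
print(haversine_km(mean_location(item_a), mean_location(item_b)))  # distance used as a feature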
{
"text": "Explicitly mentioned dates as well as information provided by the DECIPHER NER are compared too. The time-similarity feature takes into account time overlap of the dates and time distance of an earlier and a later event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
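One possible reading of the time-similarity feature, sketched on year intervals (the representation and formula here are assumptions, not taken from the paper):

def time_features(span_a, span_b):
    # span_a, span_b: (start_year, end_year) intervals, e.g. an author's lifetime.
    overlap = max(0, min(span_a[1], span_b[1]) - max(span_a[0], span_b[0]))
    # Gap between the earlier and the later event; 0 when the intervals touch or overlap.
    distance = max(0, max(span_a[0], span_b[0]) - min(span_a[1], span_b[1]))
    return overlap, distance

print(time_features((1632, 1675), (1629, 1667)))  # (35, 0): large overlap, no gap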
{
"text": "Other features reflect an overlap between visual art branches represented by artists in question (Photographer, Architect, etc.) , an overlap between their styles, genres and all other information available from external sources. We also employ a matrix of artistic influences that has been derived from a large collection of domain texts by means of relation extraction methods.",
"cite_spans": [
{
"start": 97,
"end": 128,
"text": "(Photographer, Architect, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
{
"text": "Finally, general relatedness of artists is precomputed from the above-mentioned collection by means of Random Indexing (RI), Explicit Semantic Analysis (ESA) and Latent Dirichlet Allocation (LDA) methods, stored in sparse matrices and entered as a final set of features to the SVR process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
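Such relatedness scores can be precomputed along the following lines (a toy sketch using gensim's LDA and cosine similarity over topic vectors; the actual corpora, parameters and the RI/ESA variants are not reproduced here):

from gensim import corpora, models, matutils

# Toy "biographies"; in the real setting these are large collections of domain texts.
docs = {
    "vermeer": "dutch painter delft genre scenes light interior".split(),
    "metsu": "dutch painter genre scenes leiden amsterdam interior".split(),
    "gaudi": "catalan architect modernisme barcelona sagrada familia".split(),
}
dictionary = corpora.Dictionary(docs.values())
corpus = [dictionary.doc2bow(d) for d in docs.values()]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)

def relatedness(name_a, name_b):
    # Cosine similarity of the two artists' LDA topic distributions.
    va = lda[dictionary.doc2bow(docs[name_a])]
    vb = lda[dictionary.doc2bow(docs[name_b])]
    return matutils.cossim(va, vb)

print(relatedness("vermeer", "metsu"), relatedness("vermeer", "gaudi"))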
{
"text": "The system is implemented in Python and takes Eginton, Francis; West, Benjamin <author name=\"Francis Eginton\" url=\"http://www.freebase.com/m/0by1w5n\"> Eginton, Francis</author>; <author name=\"Benjamin West\" url=\"http://www.freebase.com/m/01z6r6\">West, Benjamin</author> Yossef Zaritsky Israeli, born Ukraine, 1891-1985 <author name=\"Joseph Zaritsky\" url=\"http://www.freebase.com/m/0bh71xw\" nationality=\"Israel\" place of birth=\"Ukraine\" date of birth=\"1891\" date of death=\"1985\">Yossef Zaritsky Israeli, born Ukraine, 1891-1985</author> Man Ray (Emmanuel Radnitzky) 1890, Philadelphia -1976, Paris <author name=\"Man Ray\" alternate name=\"Emmanuel Radnitzky\" url=\"http://www.freebase.com/m/0gskj\" date of birth=\"1890\" place of birth=\"Philadelphia\" date of death=\"1976\" place of death=\"Paris\"> Man Ray (Emmanuel Radnitzky) 1890, Philadelphia -1976, Paris</author> advantage of several existing modules such as gensim 10 for RI, ESA and other text-representation methods, numpy 11 for Support Vector Regression (SVR) with RBF kernels, PyVowpal 12 for an efficient implementation of the LDA, and nltk 13 for general text pre-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
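The feature-combination step could be sketched as follows (using scikit-learn's SVR with an RBF kernel purely for illustration; the paper's own dependency list names numpy and PyVowpal, and its code is not published):

import numpy as np
from sklearn.svm import SVR

# Each row is one pair of author fields; columns are the features described above
# (Jaccard on names, word overlap, edit distance, location, time, relatedness, ...).
X_train = np.array([[1.0, 0.9, 0.95, 0.1, 0.8, 0.7],
                    [0.0, 0.1, 0.20, 0.9, 0.0, 0.2],
                    [0.5, 0.6, 0.70, 0.3, 0.5, 0.6]])
y_train = np.array([4.8, 1.2, 3.5])  # gold similarity scores on the 0-5 scale (placeholders)

model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X_train, y_train)

X_test = np.array([[0.9, 0.8, 0.9, 0.2, 0.7, 0.7]])
print(model.predict(X_test))  # predicted similarity score for a new pair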
{
"text": "The resulting system was trained and tested on the data provided by the task organizers. The train and test sets consisted each of 750 pairs of cultural heritage records from Europeana along with the gold standard for the training set. The BUT-TYPED system reached score 0.7592 in the author field (crossvalidated results, Pearson correlation) on the training set where 80 % were used for training whereas 20 % for testing. The score for the field on the testing set was 0.7468, while the baseline was 0.4278.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description and Results",
"sec_num": "3"
},
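The evaluation protocol (80/20 split on the training pairs, Pearson correlation) can be reproduced in outline as follows (feature values and gold scores below are random placeholders):

import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((750, 6))                         # 750 pairs, 6 placeholder features
y = X @ rng.random(6) + rng.normal(0, 0.1, 750)  # placeholder gold scores

# 80 % of the training pairs for fitting, 20 % held out, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pred = SVR(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
print(pearsonr(y_te, pred)[0])  # Pearson correlation on the held-out 20 %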
{
"text": "Despite issues related to the low quality of the gold standard data, the attention paid to the similarity computation on the chosen field showed to bear fruit. The realized system ranked third among 14 others in the criterion we focused on. Domain knowledge proved to significantly help in measuring semantic closeness between authors and the results correspond to an intuitive understanding of the sim-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4"
},
{
"text": "http://www.europeana.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://decipher-research.eu/ 3 http://www.geonames.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/CRF-NER.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by the EC's Seventh Framework Programme (FP7/2007-2013) under grant agreement No.270001, and by the Centrum excellence IT4Innovations (ED1.1.00/02.0070).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agirre, E., Diab, M., Cer, D., and Gonzalez-Agirre, A. (2012). Semeval-2012 task 6: A pilot on se- mantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computa- tional Semantics-Volume 1: Proceedings of the main conference and the shared task, and Vol- ume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385- 393. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Computing similarity between items in a digital library of cultural heritage",
"authors": [
{
"first": "N",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Clough",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal on Computing and Cultural Heritage (JOCCH)",
"volume": "5",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aletras, N., Stevenson, M., and Clough, P. (2012). Computing similarity between items in a digital library of cultural heritage. Journal on Computing and Cultural Heritage (JOCCH), 5(4):16.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Space projections as distributional models for semantic composition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Annesi",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Storch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "323--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annesi, P., Storch, V., and Basili, R. (2012). Space projections as distributional models for seman- tic composition. In Computational Linguistics and Intelligent Text Processing, pages 323-335. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unitor: combining semantic text similarity functions through sv regression",
"authors": [
{
"first": "D",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Annesi",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Storch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "597--602",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Croce, D., Annesi, P., Storch, V., and Basili, R. (2012). Unitor: combining semantic text similar- ity functions through sv regression. In Proceed- ings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 597- 602. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "6--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabrilovich, E. and Markovitch, S. (2007). Comput- ing semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, pages 6-12.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An introduction to latent semantic analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Laham",
"suffix": ""
}
],
"year": 1998,
"venue": "Discourse processes",
"volume": "25",
"issue": "",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landauer, T., Foltz, P., and Laham, D. (1998). An in- troduction to latent semantic analysis. Discourse processes, 25(2):259-284.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th International Conference on Machine Learning",
"volume": "1",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. (1998). An information-theoretic definition of similarity. In Proceedings of the 15th Inter- national Conference on Machine Learning, vol- ume 1, pages 296-304. Citeseer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Paraphrase identification using weighted dependencies and word semantics",
"authors": [
{
"first": "M",
"middle": [
"C"
],
"last": "Lintean",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Rus",
"suffix": ""
}
],
"year": 2010,
"venue": "Informatica: An International Journal of Computing and Informatics",
"volume": "34",
"issue": "1",
"pages": "19--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lintean, M. C. and Rus, V. (2010). Paraphrase iden- tification using weighted dependencies and word semantics. Informatica: An International Journal of Computing and Informatics, 34(1):19-28.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "semeval-2012 task 8: Cross-lingual textual entailment for content synchronization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Marchetti",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "399--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Negri, M., Marchetti, A., Mehdad, Y., Bentivogli, L., and Giampiccolo, D. (2012). semeval-2012 task 8: Cross-lingual textual entailment for con- tent synchronization. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 399-407. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Running experiments on amazon mechanical turk",
"authors": [
{
"first": "G",
"middle": [],
"last": "Paolacci",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chandler",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ipeirotis",
"suffix": ""
}
],
"year": 2010,
"venue": "Judgment and Decision Making",
"volume": "5",
"issue": "5",
"pages": "411--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paolacci, G., Chandler, J., and Ipeirotis, P. (2010). Running experiments on amazon mechanical turk. Judgment and Decision Making, 5(5):411- 419.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An introduction to random indexing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2005,
"venue": "Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering, TKE 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sahlgren, M. (2005). An introduction to random in- dexing. In Methods and Applications of Seman- tic Indexing Workshop at the 7th International Conference on Terminology and Knowledge En- gineering, TKE 2005. Citeseer.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>: Number of art-related entities from various</td></tr><tr><td>sources</td></tr><tr><td>stored in the automaton too. A normalized form of a</td></tr><tr><td>name and its semantic type is returned for each en-</td></tr><tr><td>tity. Normalized forms enable identifying equivalent</td></tr><tr><td>entities expressed differently in texts, e. g., Gabri\u00ebl</td></tr><tr><td>Metsu refers to the same person as Gabriel Metsu,</td></tr><tr><td>US can stand for the United States (of America), etc.</td></tr><tr><td>Type-specific information is also stored. It includes</td></tr><tr><td>a detailed type (e. g., architect, sculptor, etc.), na-</td></tr><tr><td>tionality, relevant periods or movements, and years</td></tr><tr><td>of birth and death for authors. Types of geographical</td></tr><tr><td>features (city, river), coordinates and the GeoNames</td></tr><tr><td>database identifiers are stored for locations.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Examples of texts in the author field and their annotations",
"html": null
}
}
}
}