|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:58:25.829168Z" |
|
}, |
|
"title": "Zero-Shot Information Extraction to Enhance a Knowledge Graph Describing Silk Textiles", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Schleider", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Troncy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "EURECOM", |
|
"location": { |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The knowledge of the European silk textile production is a typical case for which the information collected is heterogeneous, spread across many museums and sparse since rarely complete. Knowledge Graphs for this cultural heritage domain, when being developed with appropriate ontologies and vocabularies, enable to integrate and reconcile this diverse information. However, many of these original museum records still have some metadata gaps. In this paper, we present a zero-shot learning approach that leverages the Concept-Net common sense knowledge graph to predict categorical metadata informing about the silk objects production. We compared the performance of our approach with traditional supervised deep learning-based methods that do require training data. We demonstrate promising and competitive performance for similar datasets and circumstances and the ability to predict sometimes more fine-grained information. Our results can be reproduced using the code and datasets published at https:// github.com/silknow/ZSL-KG-silk.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The knowledge of the European silk textile production is a typical case for which the information collected is heterogeneous, spread across many museums and sparse since rarely complete. Knowledge Graphs for this cultural heritage domain, when being developed with appropriate ontologies and vocabularies, enable to integrate and reconcile this diverse information. However, many of these original museum records still have some metadata gaps. In this paper, we present a zero-shot learning approach that leverages the Concept-Net common sense knowledge graph to predict categorical metadata informing about the silk objects production. We compared the performance of our approach with traditional supervised deep learning-based methods that do require training data. We demonstrate promising and competitive performance for similar datasets and circumstances and the ability to predict sometimes more fine-grained information. Our results can be reproduced using the code and datasets published at https:// github.com/silknow/ZSL-KG-silk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The mentioning of European silk textiles often evokes images of clothes and furniture of the old aristocracies and the lavish lifestyles of kings and queens. Nowadays, the knowledge about the occidental way of producing these expensive items is, however, more and more endangered. Many museums and collections around the globe fortunately still have silk objects, or at least public records with metadata and images illustrating them. Such specific museum data, from many different sources about Cultural Heritage objects that are partly centuries old, have naturally some gaps: sometimes, the production year or place is unknown, but the material and technique used is described; sometimes, a rich textual description is provided with many little details about the object production and what it depicts, but categorical values informing about the exact material or technique used is not provided (Figure 1) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 897, |
|
"end": 907, |
|
"text": "(Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The recent progress in natural language processing and more specifically in information extraction can help to address these problems. In particular, it is common to train text classification models from metadata records in order to predict missing categorical values in other records. However, such models do require a significant amount of annotated data for training, which is expensive to get when the domain is very specific. In this paper, we first briefly introduce the development of a multilingual knowledge graph (KG) about silk textiles production. This knowledge graph, developed with the help of historians and museum experts, uses the CIDOC-CRM ontology and a domain expert designed thesaurus defining the silk textile concepts. Next, we propose a Zero-Shot Learning (ZSL) approach based on the ConceptNet common-sense knowledge graph (Speer et al., 2017) , to predict the missing categorical metadata while avoiding to rely on training data. We compare our approach with supervised approaches and we show competitive results and demonstrate the ability to predict more fine-grained concepts despite the specificity of this domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 849, |
|
"end": 869, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of this paper is structured as follows. We present some relevant related work in Section 2. We summarise the development of the Knowledge Graph and we describe the dataset being used in our experiments in Section 3. We detail our Zero Shot Learning approach as well as two supervised learning baselines in Section 4. We analyze the classification results in Section 5. Finally, we provide conclusions and outline some future work in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Integrating Cultural Heritage data into a knowledge graph can be difficult and time consuming, as data Figure 1 : Examples from three different museums with missing categorical properties: a) no subject depiction for the record 37.80.1 from the Metropolitan Museum of Art; b) no material for the record Cl. XXIV n. 1748 from the Musei di Venezia; c) no technique for the record GMMP-733-002 from the French Mobilier National has to be usually collected from many different sources that generally do not use standard formats. For museum data, there are models and ontologies that make such a process less challenging. We consider the CIDOC Conceptual Reference Model (CRM) to be one of them. CIDOC-CRM is an event-centric ontology through which everything can be represented as an event. A man-made object has, for example, been produced at some point in time. Certain materials might have been used during this production that has taken place at a specific location in a specific century. CIDOC-CRM offers not only classes and properties for such an eventbased representation, but is also easily expandable with classes from other ontologies. Hence, CIDOC-CRM comes also with useful extensions such as CRMSci (Scientific Observation Model) and CR-Mdig (model for provenance metadata). CIDOC-CRM is an official ISO standard since 2006 and this status has been renewed in 2014.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 111, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are more and more successful examples of KG development in the Cultural Heritage realm. For example, ARCO (Carriero et al., 2019) is a knowledge graph about the Italian Cultural Heritage that at least indirectly reuses some CIDOC-CRM classes and properties. The Dutch Rijksmuseum collection is available as linked open data (Dijkshoorn et al., 2018) while DOREMUS is the largest knowledge graph available about about classical musical works and interpretations (Achichi et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 355, |
|
"text": "(Dijkshoorn et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 489, |
|
"text": "(Achichi et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Information extraction from texts generally rely on supervised machine learning and deep learning classification techniques that require labelled training data. While general purpose transformer-based language models are often used, fine-tuning them often require numerous examples (Dodge et al., 2020) . One possible solution to address this lack of labelled data is to use active learning (Yuan et al., 2020) in conjunction with pre-trained models such as BERT (Devlin et al., 2019) that comes with possible bias problems (Papakyriakopoulos et al., 2020) . In Cultural Heritage domains, several works are trying to compensate for the lack of labelled data. One method is to leverage human annotations through crowdsourcing together with extracted visual and textual features and automatic annotation through transfer learning (Shabani et al., 2020) . In general Convolutional Neural Networks (CNN) are often used for image, audio and video data (Clermont et al., 2020) , whereas Recurrent Neural Networks (RNN) is also used for textual Cultural Heritage data (Kambau et al., 2018) . Regular neural network (Belhi et al., 2018) for textual data are also used together with word embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 302, |
|
"text": "(Dodge et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 410, |
|
"text": "(Yuan et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 484, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 556, |
|
"text": "(Papakyriakopoulos et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 850, |
|
"text": "(Shabani et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 947, |
|
"end": 970, |
|
"text": "(Clermont et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1061, |
|
"end": 1082, |
|
"text": "(Kambau et al., 2018)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1108, |
|
"end": 1128, |
|
"text": "(Belhi et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "More recently, zero-shot learning approaches have attracted a lot of attention for their ability to offer text classification without relying on training data. It is a form of automatic classification for which textual documents unseen by the model can be analyzed and classified. Several frameworks have been proposed over the years, based on BERT (Zhang et al., 2019) , (Ye et al., 2020) or other large pre-trained models (Weller et al., 2020) . Our approach relies on ZeSTE 1 (Zero Shot Topic Extraction) (Harrando and Troncy, 2021 ) that provides a framework for extracting topics from textual documents using the ConceptNet commonsense knowledge graph. In addition, the framework provides explainability of its classification results using the ConceptNet KG neighborhood.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 369, |
|
"text": "(Zhang et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 389, |
|
"text": "(Ye et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 445, |
|
"text": "(Weller et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 534, |
|
"text": "(Harrando and Troncy, 2021", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The development of the knowledge graph has followed an established extraction, transform and load (ETL) pipeline. First, domain experts have made a relevant selection of museum websites. We have developed a crawling software that directly scrapes the public records and images from their websites or relies on open APIs when available, and saved the metadata information into a common intermediate JSON format. Next, domain experts have provided mapping rules by interpreting the original metadata to a unified target ontology model based on CIDOC-CRM. This enables to address a first heterogeneity problem as how similar information are differently expressed across museums. For example, the production place of a silk textile is represented in a field named \"Culture\" in the case of the MET museum and in a field named \"Object Place\" in the case of the Museum of Fine Arts of Boston. Museums come from all over the world and the field names are also multilingual. For example, the production place would be informed in a field named \"Lugar de Producci\u00f3n/Ceca\" in the case of the Red Digital de Colecciones de Museosde Espa\u00f1a. The conversion process also addresses a second heterogeneity problem by disambiguating the original metadata field values to common concepts defined in a multilingual thesaurus about the silk production. Hence, specific weaving techniques or materials used that are mentioned in the metadata get linked to these unique concepts ( Figure 2 ). This disambiguation is performed by string matching and simple heuristics. Other controlled vocabular-ies for normalizing place names (Geonames 2 ) and time periods (Getty AAT) are also used. The resulting knowledge graph can be accessed via SPARQL queries or a dedicated RESTful API.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1458, |
|
"end": 1466, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Knowledge Graph Development", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The multilingual Knowledge Graph of silk textile productions consists of descriptions of 36210 unique objects illustrated by 74527 images in four languages: English, Spanish, French and Italian. While the information integration process has been effective, one general problem of the KG is that many properties have missing values. In this paper, we focus on three important properties describing the silk production namely: the material used, the weaving technique employed and a the subject depicted. Consequently, we extract from the knowledge graph three subsets corresponding to the set of objects having values for those properties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and Preprocessing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The silk thesaurus which is being used to normalize the values of those properties contains a very exhaustive inventory of possible materials and techniques, organized in a hierarchy. While some of those materials or techniques are widely used in the data, others are niche and the knowledge graph includes only a very limited number of objects with some of them. One solution is to walk up the thesaurus hierarchy and only consider more general concepts. Ultimately, we need to find a trade off between using fine-grained concepts with the risk of having too sparse data, or too broad concepts with the risk of being non informative. This is a manual process informed by both the thesaurus hierarchy and the available data. Table 1 provides the list of the thesaurus concepts that we finally aim to predict for the three properties (material, technique, depiction) as well as the number of unique objects.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 725, |
|
"end": 732, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset and Preprocessing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As a general preprocessing step, we also removed all records that have multi-valued properties. Some records have, for example, both \"Gold Thread\" and \"Vegetable Fiber\" set as material properties. Including such a record would make the training of a model capable of distinguishing these two concepts harder. We also create language specific subsets. We observe significant differences between the English and Spanish subsets, which highlight the heterogeneous nature of our sources. In particular, subject depiction sticks out as we only have objects from Spanish records having this property informed. The language specific subsets will be used by our Zero Shot Learning approach (Section 4.2) while supervised learning methods will make use of the complete multilingual dataset in the 4 languages (Section 4.1). 4 Approach", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset and Preprocessing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In order to be able to evaluate our approach, we propose to compare it with two supervised classification methods that we use as baselines. For both of them, we will use the three sub-sets described in the Table 1 and perform a 80%-20% split in order to have training data. We will perform a five-fold cross validation for testing the models. Classification based on textual descriptions. The goal of this approach is to predict missing categorical values of museum records based on lengthy textual descriptions such as the note of the S4_Observation in Figure 2 . This approach consists of a Convolutional Neural Network (CNN) built over cross-lingual pre-trained word embeddings which are the aligned fastText vectors trained on Wikipedia (Joulin et al., 2018) . More precisely, a series of convolutional blocks with varying kernel sizes (2,3,4), each consisting of 100 filters, are applied to a sequence of such word-embeddings that got mapped from input description texts from the Knowledge Graph. These filters create an output for which a Gaussian Error Unit (GELU) nonlinearity is used and a max-pooling operation is applied for each block. The idea is to, hopefully, select the best features of each block. Afterwards, they get concatenated into one single vector, regularised by a dropout layer and finally sent to a softmax classification layer to come up with the final predictions per input.", |
|
"cite_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 762, |
|
"text": "(Joulin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 213, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 562, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supervised Approaches", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Classification based on images and metadata. The goal of this approach is also to predict missing categorical values but this time, using the images illustrating the objects with the other metadata values. The underlying assumption is, that it is to some degree visible on images what materials have been used or which technique was employed to produce a silk textile. For this model, a CNN was also used. More precisely, a pre-trained ResNet backbone network served as a generic feature extraction network. The output is then processed by several fully connected network layers that deliver a joint representation of the images and a final classification layer offers afterwards a probabilistic class score per variable and concept. The model is trained based on multi-task learning to perform predictions for the three properties simultaneously: material, technique and depiction. The training is based on stochastic minibatch gradient descent and using focal loss. For ResNet only, the last convolutional layers are fine-tuned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approaches", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The benefit of our approach is to perform a similar prediction without relying on any training data. The underlying assumption is that a textual document about a topic such as \"Embroidery\" will probably mention other words that are similar to this concept such as \"fabric\" or \"stitch\". 3 Our approach relies on the ConceptNet common sense knowledge graph. More precisely, we need to feed our approach with mappings between the targeted values we wish to predict and the ConceptNet network. These mappings have been manually established (Table 3) . ConceptNet is then used to produce a list of candidate words related to the concept of interest, which can be called \"topic neighbourhood\". Each topic neighbourhood is created by querying every node that is N steps away from the label node. In our experiments, we set N=2. A score for each label is computed based on the content of the text document that we give as input. This score is calculated based on cosine similarity via ConceptNet Numberbatch, the graph embeddings of the network. Even if a word has several meanings, only one neighbourhood per spelling is generated. The score is then supposed to represent the relevance of any other term to the main label inside a neighbourhood. Based on these scores, the whole document (museum record) gets also a score and, therefore, a document label too. This is done by quantifying the overlap between the document content as a list of tokens and the label neighbourhood nodes. Finally, as mentioned before, all predicted document labels can be explained by the model through showing the path between nodes or highlighting the words or n-grams that contributed to the final classification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 536, |
|
"end": 545, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Zero-Shot Prediction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This approach has a number of limitations: the concepts that should be predicted must exist in ConceptNet. Furthermore, while ConceptNet is multilingual, the embeddings are language specific. Therefore, our zero shot learning approach will make use of the language specific subsets described in Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we compare the results of our Zero-Shot Learning (ZSL) approach with the supervised methods described in the Section 4.1. We present the results alongside each of the three properties of interest: material, technique and depiction. It is worth to note that the precision, recall, F1 scores are obtained on 20% of the dataset following a 5-fold cross validation while the figures reported for the ZSL method concerns the entire language specific datasets described in Section 3. Table 2 shows the results for predicting material concepts. The two baselines approaches can only predict whether the material used is \"Metal\" or \"Vegetable Fiber\" while our ZSL approach can predict more fine grained concepts than just \"Metal\", such as \"Gold\" or \"Silver\". On the English subset, the ZSL method shows promising results with F1score of 71.6% and 64.4%) respectively. On the Spanish subset, the prediction results are lower for the ZSL method, in comparison to the supervised approach. The topic neighborhood in this language is also less elaborated. The Text CNN method benefits clearly from the multilingual embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 502, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The results for the prediction of technique concepts are presented in Table 4 . On the Spanish subset, we again observe lower scores for the ZSL approach. The problem relies on both the quality of the Spanish textual descriptions and on the Spanish concept neighbourhoods in ConceptNet. On the English subset, the ZSL approach performs in par with respect to the Image CNN baseline but less well than the Text CNN one. We observe that \"Tabby\", a very domain-specific concept, is particularly difficult to predict and it was discarded by the Text CNN approach since too little usable textual descriptions were available. The ZSL method performs reasonably well with a similar input and better than the Image CNN baseline. Figure 3 provides the confusion matrix of the predictions made for the technique property with the ZSL method on the English subset. We can see that the true label is usually predicted the most per class, especially in the case of \"embroidery\" and \"velvet\". We observe that \"Brocaded\" is often confused with \"embroidery\" or \"\"velvet\" while \"Damask\" and \"tabby\" are rarely predicted. In the case of \"damask\", this just reflects a low amount of samples, whereas in the case of \"tabby\", the predictions almost do not work at all (F1-score of 2.9% ). Figure 4 shows an example where \"Embroidery\" is correctly predicted, while Figure 5 depicts a counter-example (the correct technique should have been \"Embroidery\"). The graph highlights the most relevant words used for the predictions. The first example is based on the text: \"Spot samplers feature motifs that are scattered in a seemingly ran- dom fashion over the surface of the foundation fabric, usually linen. These samplers are rarely signed or dated, and often include motifs that are only partially worked, leading to the conclusion that this type of sampler was made as a personal stitch reference for its maker, and not for display, as band samplers were signed by student embroiderers. 
The sampler features flowers, obelisks on pedestals, and an \"S\" motif, in addition to geometric designs that are of the type that would have been used to decorate small purses, cushions, and other accessories.\", taken from a record from the MET Museum. 4 The second text is \"An example of the kind of work [Catherine de Medici] appreciated is the Museum's panel of yellow satin embroidered with silk threads. One of a set of three (the others are in the Mus\u00e9e Historique des Tissus, Lyon), it hung as a valence around the top of a four-poster bed. Various print sources were culled for the airy design of grotesques, while its five vignettes derived from Ovid's Metamorphoses-based on the myths of Europa, Actaeon, Semele, Pyramus, and Salmacisare adapted from woodcut illustrations published by Bernard Salomon in Lyon in 1557. Its brilliant colors, exquisite design, and sumptuous material would have suited the queen's taste perfectly.\", also Table 5 : Results for subject depiction property across approaches Figure 4 : \"Embroidery\" was correctly predicted by our ZSL approach (English, Technique) in this case. Relevant words in the ConceptNet topic neighborhood are highlighted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2218, |
|
"end": 2219, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 77, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 729, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1268, |
|
"end": 1276, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1343, |
|
"end": 1351, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2911, |
|
"end": 2918, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2978, |
|
"end": 2986, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "taken from the MET Museum. 5 The results for the prediction of subject depiction concepts are presented in Table 5 . No results are reported for the Text CNN baseline due to the lack of data. We observe that our ZSL approach performs well for predicting the \"Flower\" concept as does the visual approach. The \"Plant\" and \"Geometry\" concepts are however more complicated to predict by the ZSL method. These concepts are general in ConceptNet and the topic neighborhood too broad for the narrower interpretation expected in the silk domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 28, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 114, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we hypothesize that a common sense knowledge graph such as ConceptNet can feed a Zero Shot Learning method for enriching a domain Figure 5 : \"Velvet\" was predicted instead of \"Embroidery\" by our ZSL approach (English, Technique) in this case. Relevant words in the ConceptNet topic neighborhood are highlighted.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 153, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "specific knowledge graph such as one describing the silk textile production. Through extensive experiments, we have demonstrated promising results for such an approach in its ability to sometimes reliably predict fine-grained concepts without requiring any training data as supervised classification techniques do. Nevertheless, we observe several limitations: the concepts that should be predicted must exist in ConceptNet with an appropriate topic neighborhood. Our results can be reproduced using the code and datasets published at https: //github.com/silknow/ZSL-KG-silk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Even if supervised methods for metadata predictions perform generally better, ZSL remains an interesting method to get accurate predictions even in specific domains that often suffer from data sparsity. We observe that it is also possible to bootstrap the predictions in using first the ZSL method and then applying supervised classification models to further increase performance. We aim to experiment with such hybrid approaches in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/D2KLab/ZeSTE", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.geonames.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See for example the object described at https://ada.silknow.org/object/ c57358a7-c908-3110-b65d-70b09f5f4c4b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://ada.silknow.org/object/ c57358a7-c908-3110-b65d-70b09f5f4c4b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://ada.silknow.org/object/ e2f34144-9ce4-3bc4-b3e0-c67854cd994f", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been partially supported by the European Union's Horizon 2020 research and innova-tion program within the SILKNOW (grant agreement No. 769504).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "DORE-MUS: A Graph of Linked Musical Works", |
|
"authors": [ |
|
{ |
|
"first": "Manel", |
|
"middle": [], |
|
"last": "Achichi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pasquale", |
|
"middle": [], |
|
"last": "Lisena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Todorov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Troncy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Delahousse", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Semantic Web Conference (ISWC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manel Achichi, Pasquale Lisena, Konstantin Todorov, Raphael Troncy, and J. Delahousse. 2018. DORE- MUS: A Graph of Linked Musical Works. In Inter- national Semantic Web Conference (ISWC).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Leveraging Known Data for Missing Label Prediction in Cultural Heritage Context", |
|
"authors": [ |
|
{ |
|
"first": "Abdelhak", |
|
"middle": [], |
|
"last": "Belhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelaziz", |
|
"middle": [], |
|
"last": "Bouras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebti", |
|
"middle": [], |
|
"last": "Foufou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Applied Sciences", |
|
"volume": "8", |
|
"issue": "10", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdelhak Belhi, Abdelaziz Bouras, and Sebti Foufou. 2018. Leveraging Known Data for Missing Label Prediction in Cultural Heritage Context. Applied Sciences, 8(10).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "ArCo: the Italian Cultural Heritage Knowledge Graph", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mancinelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Marinucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [ |
|
"Giovanni" |
|
], |
|
"last": "Nuzzolese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Presutti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chiara", |
|
"middle": [], |
|
"last": "Veninata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Semantic Web Conference (ISWC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Mancinelli, L. Marinucci, Andrea Giovanni Nuzzolese, V. Presutti, and Chiara Veninata. 2019. ArCo: the Italian Cultural Heritage Knowledge Graph. In International Semantic Web Conference (ISWC).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Assessing the Semantic Similarity of Images of Silk Fabrics Using Convolutional Neural Betworks. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Clermont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Dorozynski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Wittich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Rottensteiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "641--648", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Clermont, M. Dorozynski, D. Wittich, and F. Rot- tensteiner. 2020. Assessing the Semantic Similarity of Images of Silk Fabrics Using Convolutional Neu- ral Betworks. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2-2020:641-648.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The Rijksmuseum collection as Linked Data. Semantic Web", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dijkshoorn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lizzy", |
|
"middle": [], |
|
"last": "Jongma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [], |
|
"last": "Aroyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Ossenbruggen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Schreiber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wesley", |
|
"middle": [], |
|
"last": "Ter Weele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wielemaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "221--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dijkshoorn, Lizzy Jongma, Lora Aroyo, J. V. Os- senbruggen, G. Schreiber, Wesley ter Weele, and J. Wielemaker. 2018. The Rijksmuseum collection as Linked Data. Semantic Web, 9:221-230.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping", |
|
"authors": [ |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Dodge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Ilharco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, A. Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Explainable Zero-Shot Topic Extraction Using a Common-Sense Knowledge Graph", |
|
"authors": [ |
|
{ |
|
"first": "Ismail", |
|
"middle": [], |
|
"last": "Harrando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Troncy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "3 rd Conference on Language, Data and Knowledge (LDK)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ismail Harrando and Raphael Troncy. 2021. Explain- able Zero-Shot Topic Extraction Using a Common- Sense Knowledge Graph. In 3 rd Conference on Language, Data and Knowledge (LDK), Zaragoza, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2979--2984", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2979-2984.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Classification for Multiformat Object of Cultural Heritage using Deep Learning", |
|
"authors": [ |
|
{ |

"first": "Ridwan Andi", |

"middle": [], |

"last": "Kambau", |

"suffix": "" |

}, |

{ |

"first": "Zainal Arifin", |

"middle": [], |

"last": "Hasibuan", |

"suffix": "" |

}, |

{ |

"first": "M. Octaviano", |

"middle": [], |

"last": "Pratama", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "3 rd International Conference on Informatics and Computing (ICIC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ridwan Andi Kambau, Zainal Arifin Hasibuan, and M.Octaviano Pratama. 2018. Classification for Mul- tiformat Object of Cultural Heritage using Deep Learning. In 3 rd International Conference on Infor- matics and Computing (ICIC).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bias in Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Orestis", |
|
"middle": [], |
|
"last": "Papakyriakopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Hegelich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan Carlos Medina", |
|
"middle": [], |
|
"last": "Serrano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabienne", |
|
"middle": [], |
|
"last": "Marco", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Conference on Fairness, Accountability, and Transparency (FAT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "446--457", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orestis Papakyriakopoulos, Simon Hegelich, Juan Car- los Medina Serrano, and Fabienne Marco. 2020. Bias in Word Embeddings. In Conference on Fair- ness, Accountability, and Transparency (FAT), pages 446--457.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Hybrid Human-Machine Classification System for Cultural Heritage Data", |
|
"authors": [ |
|
{ |
|
"first": "Shaban", |
|
"middle": [], |
|
"last": "Shabani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Sokhn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heiko", |
|
"middle": [], |
|
"last": "Schuldt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2 nd Workshop on Structuring and Understanding of Multimedia Her-itAge Contents (SUMAC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaban Shabani, Maria Sokhn, and Heiko Schuldt. 2020. Hybrid Human-Machine Classification Sys- tem for Cultural Heritage Data. In 2 nd Workshop on Structuring and Understanding of Multimedia Her- itAge Contents (SUMAC), pages 49--56.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Chin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Havasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "31 st AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4444--4451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In 31 st AAAI Conference on Artifi- cial Intelligence, pages 4444--4451.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning from Task Descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Orion", |
|
"middle": [], |
|
"last": "Weller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Lourie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Conference on Empirical Methods in Naturual Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1361--1375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orion Weller, Nicholas Lourie, Matt Gardner, and Matthew E. Peters. 2020. Learning from Task De- scriptions. In Conference on Empirical Methods in Naturual Language Processing (EMNLP), pages 1361-1375.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Zero-shot Text Classification via Reinforced Self-training", |
|
"authors": [ |
|
{ |
|
"first": "Zhiquan", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxia", |
|
"middle": [], |
|
"last": "Geng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaoyan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingmin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoxiao", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suhang", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huajun", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "58 th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, Suhang Zheng, F. Wang, J. Zhang, and Huajun Chen. 2020. Zero-shot Text Classification via Reinforced Self-training. In 58 th Annual Meet- ing of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Cold-start active learning through self-supervised language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Michelle", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hsuan-Tien", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michelle Yuan, Hsuan-Tien Lin, and Jordan L. Boyd- Graber. 2020. Cold-start active learning through self-supervised language modeling. In Conference on Empirical Methods in Natural Language Process- ing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Integrating semantic knowledge to tackle zero-shot text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jingqing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyawat", |
|
"middle": [], |
|
"last": "Lertvittayakumjorn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yike", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1031--1040", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019. Integrating semantic knowledge to tackle zero-shot text classification. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1031-1040, Minneapolis, Min- nesota.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "The MET record provided inFigure 1arepresented in the knowledge graph using the CIDOC-CRM ontology and controlled vocabularies", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Confusion matrix for the property technique on the English subset for the ZSL method. The Y-axis represents the true labels and the X-axis the predicted ones.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Property</td><td>KG Concept</td><td>ConceptNet</td></tr><tr><td>Material</td><td>Vegetal Fibre</td><td>/c/es/vegetal</td></tr><tr><td>Material</td><td>Gold Thread</td><td>/c/es/oro, /c/en/gold</td></tr><tr><td>Material</td><td>Silver Thread</td><td>/c/es/plata, /c/en/silver</td></tr><tr><td colspan=\"2\">Technique Damask</td><td>/c/es/damasco, /c/en/damask</td></tr><tr><td colspan=\"2\">Technique Embroidery</td><td>/c/es/bordado, /c/en/embroidery</td></tr><tr><td colspan=\"2\">Technique Velvet</td><td>/c/es/terciopelo, /c/en/velvet</td></tr><tr><td colspan=\"2\">Technique Voided Velvet</td><td>/c/en/velvet</td></tr><tr><td colspan=\"2\">Technique Cut Velvet</td><td>/c/es/terciopelo</td></tr><tr><td colspan=\"2\">Technique Plain Cut Velvet</td><td>/c/es/terciopelo</td></tr><tr><td colspan=\"3\">Technique Fa\u00e7onne Cut Velvet /c/es/terciopelo</td></tr><tr><td colspan=\"2\">Technique Cisel\u00e9 Velvet</td><td>/c/es/terciopelo</td></tr><tr><td colspan=\"2\">Technique Tabby (silk weave)</td><td>/c/en/tabby</td></tr><tr><td colspan=\"2\">Technique Louisine</td><td>/c/es/tafet\u00e1n</td></tr><tr><td colspan=\"2\">Technique Muslin</td><td>/c/es/tafet\u00e1n</td></tr><tr><td colspan=\"2\">Technique Satin (Fabric)</td><td>/c/es/raso, /c/en/satin</td></tr><tr><td colspan=\"2\">Technique Brocaded</td><td>/c/en/brocaded</td></tr><tr><td>Depiction</td><td>Vegetal Motif</td><td>/c/es/planta</td></tr><tr><td>Depiction</td><td>Vine</td><td>/c/es/planta</td></tr><tr><td>Depiction</td><td>Thistle</td><td>/c/es/planta</td></tr><tr><td>Depiction</td><td>Leaf</td><td>/c/es/planta</td></tr><tr><td>Depiction</td><td>Floral Motif</td><td>/c/es/flor</td></tr><tr><td>Depiction</td><td>Fleur-de-lis</td><td>/c/es/flor</td></tr><tr><td>Depiction</td><td>Rose</td><td>/c/es/flor</td></tr><tr><td>Depiction</td><td>Bunch</td><td>/c/es/flor</td></tr><tr><td>Depiction</td><td>Geometrical 
Motif</td><td>/c/es/geometr\u00eda</td></tr><tr><td>Depiction</td><td>Rhombus</td><td>/c/es/geometr\u00eda</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Results for the material property across approaches" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Mapping between the concepts used in our knowledge graph and ConceptNet" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Approach</td><td colspan=\"2\">Language Class</td><td colspan=\"3\">Precision Recall F1-score Support</td></tr><tr><td>ZSL</td><td>ES</td><td>Flower</td><td>99.3% 72.1%</td><td>83.5%</td><td>1397</td></tr><tr><td>ZSL</td><td>ES</td><td>Plant</td><td>2.2% 26.1%</td><td>4.0%</td><td>23</td></tr><tr><td>ZSL</td><td>ES</td><td>Geometry</td><td>3.1% 100%</td><td>5.9%</td><td>4</td></tr><tr><td colspan=\"2\">CNN -Image ES/FR</td><td>Flower</td><td>89.9% 88.8%</td><td>89.3%</td><td/></tr><tr><td colspan=\"2\">CNN -Image ES/FR</td><td>Plant</td><td>45.1% 38.1%</td><td>41.3%</td><td/></tr><tr><td colspan=\"2\">CNN -Image ES/FR</td><td>Geometry</td><td>35.8% 50.0%</td><td>41.3%</td><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "Results for the technique property across approaches" |
|
} |
|
} |
|
} |
|
} |