|
{ |
|
"paper_id": "K18-1004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:09:34.056612Z" |
|
}, |
|
"title": "A Trio Neural Model for Dynamic Entity Relatedness Ranking", |
|
"authors": [ |
|
{ |
|
"first": "Tu", |
|
"middle": [ |
|
"Ngoc" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tuan", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Bosch Gmbh", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Nejdl", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Measuring entity relatedness is a fundamental task for many natural language processing and information retrieval applications. Prior work often studies entity relatedness in static settings and an unsupervised manner. However, entities in real-world are often involved in many different relationships, consequently entity-relations are very dynamic over time. In this work, we propose a neural networkbased approach for dynamic entity relatedness, leveraging the collective attention as supervision. Our model is capable of learning rich and different entity representations in a joint framework. Through extensive experiments on large-scale datasets, we demonstrate that our method achieves better results than competitive baselines.", |
|
"pdf_parse": { |
|
"paper_id": "K18-1004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Measuring entity relatedness is a fundamental task for many natural language processing and information retrieval applications. Prior work often studies entity relatedness in static settings and an unsupervised manner. However, entities in real-world are often involved in many different relationships, consequently entity-relations are very dynamic over time. In this work, we propose a neural networkbased approach for dynamic entity relatedness, leveraging the collective attention as supervision. Our model is capable of learning rich and different entity representations in a joint framework. Through extensive experiments on large-scale datasets, we demonstrate that our method achieves better results than competitive baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Measuring semantic relatedness between entities is an inherent component in many text mining applications. In search and recommendation, the ability to suggest most related entities to the entity-bearing query has become a standard feature of popular Web search engines (Blanco et al., 2013) . In natural language processing, entity relatedness is an important factor for various tasks, such as entity linking (Hoffart et al., 2012) or word sense disambiguation (Moro et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 291, |
|
"text": "(Blanco et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 432, |
|
"text": "(Hoffart et al., 2012)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 481, |
|
"text": "(Moro et al., 2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, prior work on semantic relatedness often neglects the time dimension and consider entities and their relationships as static. In practice, many entities are highly ephemeral (Jiang et al., 2016) , and users seeking information related to those entities would like to see fresh information. For example, users looking up the entity Taylor Lautner during 2008-2012 might want to be recommended with entities such as The Twilight Saga, due to Lautner's well-known performance in the film series; however the same query in August 2016 should be served with entities related to his appearances in more recent films such as \"Scream Queens\", \"Run the Tide\". In addition, much of previous work resorts to deriving semantic relatedness from co-occurrence -based computations or heuristic functions without direct optimization to the final goal. We believe that desirable framework should see entity semantic relatedness as not separate but an integral part of the process, for instance in a supervised manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 203, |
|
"text": "(Jiang et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we address the problem of entity relatedness ranking, that is, designing the semantic relatedness models that are optimized for ranking systems such as top-k entity retrieval or recommendation. In this setting, the goal is not to quantify the semantic relatedness between two entities based on their occurrences in the data, but to optimize the partial order of the related entities in the top positions. This problem differs from traditional entity ranking (Kang et al., 2015) in that the entity rankings are driven by user queries and are optimized to their (ad-hoc) information needs, while entity relatedness ranking also aims to uncover the meanings of the the relatedness from the data. In other words, while conventional entity semantic relatedness learns from data (editors or content providers' perspectives), and entity ranking learns from the user's perspective, the entity relatedness ranking takes the tradeoff between these views. Such a hybrid approach can benefit applications such as exploratory entity search (Miliaraki et al., 2015) , where users have a specific goal in mind, but at the same time are opened to other related entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 491, |
|
"text": "(Kang et al., 2015)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1041, |
|
"end": 1065, |
|
"text": "(Miliaraki et al., 2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also tackle the issue of dynamic ranking and design the supervised-learning model that takes into account the temporal contexts of entities, and proposes to leverage collective attention from public sources. As an illustration, when one looks into the Wikipedia page of Taylor Lautner, each navi- gation to other Wikipedia pages indicates the user interest in the corresponding target entity given her initial interest in Lautner. Collectively, the navigation traffic observed over time is a good proxy to the shift of public attention to the entity (Figure 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 553, |
|
"end": 562, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, while previous work mainly focuses on one aspect of the entities such as textual profiles or linking graphs , we propose a trio neural model that learns the low level representations of entities from three different aspects: Content, structures and time aspects. For the time aspect, we propose a convolutional model to embed and attend to local patterns of the past temporal signals in the Euclidean space. Experiments show that our trio model outperforms traditional approaches in ranking correlation and recommendation tasks. Our contributions are summarized as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We present the first study of dynamic entity relatedness ranking using collective attention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We introduce an attention-based convolutional neural networks (CNN) to capture the temporal signals of an entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a joint framework to incorporate multiple views of the entities, both from content provider and from user's perspectives, for entity relatedness ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Related Work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most of existing semantic relatedness measures (e.g. derived from Wikipedia) can be divided into the following two major types: (1) text-based,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Relatedness and Recommendation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "graph-based. For the first, traditional methods mainly focus on a high-dimensional semantic space based on occurrences of words Markovitch (2007, 2009) ) or concepts ( Aggarwal and Buitelaar (2014)). In recent years, embedding methods that learn low-dimensional word representations have been proposed. Hu et al. (2015) leverages entity embedding on knowledge graphs to better learn the distributional semantics. Ni et al. (2016) use an adapted version of Word2Vec, where each entity in a Wikipedia page is considered as a term. For the graph-based approaches, these measures usually take advantage of the hyperlink structure of entity graph (Witten and Milne, 2008; Guo and Barbosa, 2014) . Recent graph embedding techniques (e.g., Deep-Walk (Perozzi et al., 2014) ) have not been directly used for entity relatedness in Wikipedia, yet its performance is studied and shown very competitive in recent related work (Zhao et al., 2015; Ponza et al., 2017) . Entity relatedness is also studied in connection with the entity recommendation task. The Spark (Blanco et al., 2013) system firstly introduced the task for Web search, Yu et al. (2014) ; Zhang et al. (2016a) exploit user click logs and entity pane logs for global and personalized entity recommendation. However, these approaches are optimized to user information needs, and also does not target the global and temporal dimension. Recently, Zhang et al. (2016b) ; Tran et al. (2017) proposed time-aware probabilistic approaches that combine 'static' entity relatedness with temporal factors from different sources. Nguyen et al. (2018) studied the task of time-aware ranking for entity aspects and propose an ensemble model to address the sub-features competing problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 151, |
|
"text": "Markovitch (2007, 2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 319, |
|
"text": "Hu et al. (2015)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 429, |
|
"text": "Ni et al. (2016)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 666, |
|
"text": "(Witten and Milne, 2008;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 689, |
|
"text": "Guo and Barbosa, 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 743, |
|
"end": 765, |
|
"text": "(Perozzi et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 914, |
|
"end": 933, |
|
"text": "(Zhao et al., 2015;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 934, |
|
"end": 953, |
|
"text": "Ponza et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1073, |
|
"text": "(Blanco et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1125, |
|
"end": 1141, |
|
"text": "Yu et al. (2014)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1144, |
|
"end": 1164, |
|
"text": "Zhang et al. (2016a)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1398, |
|
"end": 1418, |
|
"text": "Zhang et al. (2016b)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1421, |
|
"end": 1439, |
|
"text": "Tran et al. (2017)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Relatedness and Recommendation", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Neural Ranking. Deep neural ranking among IR and NLP can be generally divided into two groups: representation-focused and interactionfocused models. The representation-focused approach (Huang et al., 2013) independently learns a representation for each ranking element (e.g., query and document) and then employ a similarity function. On the other hand, the interactionfocused models are designed based on the early interactions between the ranking pairs as the input of network. For instance, Lu and Li (2013) ; Guo et al. (2016) build interactions (i.e., local matching signals) between two pieces of text and trains a feed-forward network for computing the matching score. This enables the model to capture various interactions between ranking elements, while with former, the model has only the chance of isolated observation of input elements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 205, |
|
"text": "(Huang et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 510, |
|
"text": "Lu and Li (2013)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 530, |
|
"text": "Guo et al. (2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Network Models", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Attention networks. In recent years, attentionbased NN architectures, which learn to focus their \"attention\" to specific parts of the input, have shown promising results on various NLP tasks. For most cases, attentions are applied on sequential models to capture global context (Luong et al., 2015) . An attention mechanism often relies on a context vector that facilitates outputting a \"summary\" over all (deterministic soft) or a sample (stochastic hard) of input states. Recent work proposed a CNN with attention-based framework to model local context representations of textual pairs (Yin et al., 2016) , or to combine with LSTM to model time-series data (Ord\u00f3\u00f1ez and Roggen, 2016; Lin et al., 2017) for classification and trend prediction tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 298, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 606, |
|
"text": "(Yin et al., 2016)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Network Models", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We denote as named entities any real-world objects registered in a database. Each entity has a textual document (e.g. content of a home page), and a sequence of references to other entities (e.g., obtained from semantic annotations), called the entity link profile. All link profiles constitute an entity linking graph. In addition, two types of information are included to form the entity collective attention.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminaries", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Temporal signals. Each entity can be associated with a number of properties such as view counts, content edits, etc. Given an entity e and a time point n, given D properties, the temporal signals set, in the form of a (univariate or multivariate) time series X \u2208", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminaries", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "R D\u00d7T consists of T real- valued vector x n\u2212T , \u2022 \u2022 \u2022 , x n\u22121 , where x t \u2208 R D cap-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminaries", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "tures the past signals of e at time point t. Entity Navigation. In many systems, the user navigation between two entities is captured, e.g., search engines can log the total click-through of documents of the target entity presented in search results of a query involving the source entity. Following learning to rank approaches (Kang et al., 2015) , we use this information as the ground truth in our supervised models. Given two entities e 1 , e 2 , the navigation signal from e 1 to e 2 at time point t is denoted by y t {e 1 ,e 2 } .", |
|
"cite_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 347, |
|
"text": "(Kang et al., 2015)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminaries", |
|
"sec_num": "3.1" |
|
}, |
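{

"text": "As an illustration of the definitions above, the following minimal Python sketch (our addition, not from the paper; toy data with D = 2 properties) assembles the temporal signal matrix $X \\in \\mathbb{R}^{D \\times T}$ from monthly counts:\nimport numpy as np\n\nT = 27  # months of history before the studied time point n\nrng = np.random.default_rng(5)\nviews = rng.poisson(1000, size=T)  # monthly page views of the entity\nedits = rng.poisson(20, size=T)    # monthly content edits of the entity\nX = np.stack([views, edits]).astype(float)  # shape (D, T) = (2, 27)\n# X[:, -1] corresponds to x_{n-1}, the most recent signal vector before time n\nprint(X.shape, X[:, -1])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preliminaries",

"sec_num": null

},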
|
{ |
|
"text": "In our setting, it is not required to have a predefined, static function quantifying the semantic relatedness between two entities. Instead, it can capture a family of functions F where the prior distribution relies on time parameter. We formalize the concepts below. Dynamic Entity Relatedness between two entities e s , e t , where e s is the source entity and e t is the target entity, in a given time t, is a function (denoted by f t (e s , e t )) with the following properties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 asymmetric: f t (e i , e j ) = f t (e j , e i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 non-negativity: f (e i , e j ) \u2265 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 indiscernibility of identicals: e i = e j \u2192 f (e i , e j ) = 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Dynamic Entity Relatedness Ranking. Given a source entity e s and time point t, rank the candidate entities e t 's by their semantic relatedness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In this work we use Wikipedia data as the case study for our entity relatedness ranking problem due to its rich knowledge and dynamic nature. It is worth noting that despite experimenting on Wikipedia, our framework is universal can be applied to other sources of entity with available temporal signals and entity navigation. We use Wikipedia pages to represent entities and page views as the temporal signals (details in section 6.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Their Dynamics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Clickstream. For entity navigation, we use the clickstream dataset generated from the Wikipedia webserver logs from February until September, 2016. These datasets contain an accumulation of transitions between two Wikipedia articles with their respective counts on a monthly basis. We study only actual pages (e.g. excluding disambiguation or redirects). In the following, we provide the first analysis of the clickstream data to gain insights into the temporal dynamics of the entity collective attention in Wikipedia. Figure 2a illustrates the distribution of entities by click frequencies, and the correlation of top popular entities (measured by total navigations) across different months is shown in Figure 2b . In general, we observe that the user navigation activities in the top popular entities are very dynamic, Table 1 , there are 24.31% of entities in top-10,000 most active entities of September 2006 do not appear in the same list the previous month. And 30.61% are new compared with 5 months before. In addition, there are 71% of entities in top-10,000 having navigations to new entities compared to the previous month, with approx. 18 new entities are navigated to, on average. Thus, the datasets are naturally very dynamic and sensitive to change. The substantial amount of missing past click logs on the newly-formed relationships also raises the necessity of an dynamic measuring approach. Figure 3 shows the overall architecture of our framework, which consists of three major components: time-, graphand content-based networks. Each component can be considered as a separate sub-ranking network. Each network accepts a tuple of three elements/representations as an input in a pair-wise fashion, i.e., the source entity e s , the target entity e t with higher rank (denoted as e (+) ) and the one with lower rank (denoted as e (\u2212) ). For the content network, each element is a sequence of terms, coming from entity textual representation. For the graph network, we learn the embed- dings from the entity linking graph. For the time network, we propose a new convolutional model learning from the entity temporal signals. More detailed are described as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 520, |
|
"end": 529, |
|
"text": "Figure 2a", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 714, |
|
"text": "Figure 2b", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 822, |
|
"end": 829, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1409, |
|
"end": 1417, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets and Their Dynamics", |
|
"sec_num": "4.1" |
|
}, |
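{

"text": "The turnover statistics above can be computed as in the following sketch (our illustration with synthetic lists; the real analysis uses the monthly top-10,000 lists):\ndef new_fraction(current_top, previous_top):\n    # share of entities in the current top list that are absent from the previous one\n    prev = set(previous_top)\n    new = [e for e in current_top if e not in prev]\n    return 100.0 * len(new) / len(current_top)\n\nsep = ['e%d' % i for i in range(10000)]        # September top-10,000 (synthetic)\naug = ['e%d' % i for i in range(2431, 12431)]  # August list, shifted by 2,431 ids\nprint(round(new_fraction(sep, aug), 2))  # -> 24.31, mirroring Table 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets and Their Dynamics",

"sec_num": null

},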
|
{ |
|
"text": "The entity relatedness ranking can be handled by a point-wise ranking model that learns to predict relatedness score directly. However, as the navigational frequency distribution is often skewed at top, supervisions guided by long-tail navigations would be prone to errors. Hence instead of learning explicitly a calibrated scoring function, we opt for a pair-wise ranking approach. When applying to ranking top-k entities, this approach has the advantage of correctly predicting partial orders of different relatedness functions f t at any time points regardless of their non-transitivity (Cheng et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 590, |
|
"end": 610, |
|
"text": "(Cheng et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Ranking Model Overview", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This work builds upon the idea of interactionbased deep neural models, i.e. learning soft semantic matches from the source-target entity pairs. Note that, we do not aim for a Siamese architecture (Chopra et al., 2005 ) (i.e., in representation-based models), where the weight parameters are shared across networks. The reason is that, the conventional kind of network produces a symmetric relation, violating the asymmetric property of the relatedness function f t (section 3.2). Concretely, each deep network \u03c8 consists of an input layer z 0 , n \u2212 1 hidden layers and an output layer z n . Each hidden layer z i is a fullyconnected network that computes the transformation: (1) In the next section we describe the input representations z 0 for each network.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 216, |
|
"text": "(Chopra et al., 2005", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Ranking Model Overview", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "z i = \u03c3 (w i \u2022 z i\u22121 + b i ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Ranking Model Overview", |
|
"sec_num": "4.2" |
|
}, |
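{

"text": "A minimal NumPy sketch of the stacked transformation in Equation (1) (our illustration, not the authors' code; the layer sizes are arbitrary):\nimport numpy as np\n\ndef sigma(x):\n    return 1.0 / (1.0 + np.exp(-x))  # logistic activation\n\nrng = np.random.default_rng(0)\nsizes = [64, 256, 256, 1]  # z_0 input size, two hidden layers, output z_n\nparams = [(rng.normal(0, 0.1, (m, n)), np.zeros(m)) for n, m in zip(sizes[:-1], sizes[1:])]\nz = rng.normal(size=sizes[0])  # z_0: input representation of the tuple\nfor w, b in params:\n    z = sigma(w @ z + b)  # z_i = sigma(w_i . z_{i-1} + b_i)\nprint(z)  # output layer z_n",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Neural Ranking Model Overview",

"sec_num": null

},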
|
{ |
|
"text": "To learn the entity representation from its content, we rely on entity textual document (word-based) as well as its link profile (entity-based) (section 3.1). Since the vocabulary size of entities and words is often very large, conventional one-hot vector representation becomes expensive. Hence, we adopt the word hashing technique from (Huang et al., 2013) , that breaks a term into character trigraphs and thus can dramatically reduce the size of the vector dimensionality. We then rely on embeddings to learn the distributed representations and build up the soft semantic interactions via input concatenation. Let E : V \u2192 R m be the embedding function, V is the vocabulary and m is the embedding size. w : V \u2192 R, is the weighting function that learns the global term importance and a weighted element-wise sum of word embedding vectors -compositionality function \u2295, the word-based representation for entity e is hence \u2295", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 358, |
|
"text": "(Huang et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content-based representation learning", |
|
"sec_num": "5.1" |
|
}, |
|
|
{ |
|
"text": "For entity-based representation, we break down the surface form of a linked entity into bag-of-words and apply analogously. The concatenation of the two representations for the tuple < e s , e (+) , e (\u2212) > is then input to the deep feed-forward network.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content-based representation learning", |
|
"sec_num": "5.1" |
|
}, |
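{

"text": "A sketch of the hashing-and-composition step (under our own simplifications: a toy vocabulary grown on the fly, random embeddings, and a uniform weighting function w(.) = 1):\nimport numpy as np\n\ndef char_trigrams(term):\n    # '#' marks word boundaries, as in the word-hashing scheme of Huang et al. (2013)\n    padded = '#' + term + '#'\n    return [padded[i:i + 3] for i in range(len(padded) - 2)]\n\nrng = np.random.default_rng(1)\nm = 8          # embedding size (toy)\nvocab = {}     # trigram -> embedding vector\n\ndef E(tri):\n    if tri not in vocab:\n        vocab[tri] = rng.normal(size=m)\n    return vocab[tri]\n\ndef w(tri):\n    return 1.0  # global term importance; uniform here for illustration\n\ndef represent(text):\n    # weighted element-wise sum over the trigrams of all terms\n    vecs = [w(t) * E(t) for term in text.split() for t in char_trigrams(term)]\n    return np.sum(vecs, axis=0)\n\nprint(represent('taylor lautner').shape)  # (8,)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content-based representation learning",

"sec_num": null

},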
|
{ |
|
"text": "To obtain the graph embedding for each entity, we adopt the idea of DeepWalk (Perozzi et al., 2014) , which learns the embedding by predicting the ver-tex sequence generated by random walk. Concretely, given an entity e, we learn to predict the sequence of entity references S e -which can be considered as the graph-wise context in the Skipgram model. We then adopt the matching histogram mapping in (Guo et al., 2016) for the soft interaction of the ranking model. Specifically, denote the bag of entities representation of e s as C e s , and that of e t as C e t ; we discretize the soft matching (calculated by cosine similarity of the embedding vectors) of each entity pair in (C e s , C e t ) into different bins. The logarithmic numbers of the count values of each bin then constitute the interaction vector. This soft-interaction in a way is similar in the idea with the traditional link-based model (Witten and Milne, 2008) , where the relatedness measure is based on the overlapping of incoming links.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 99, |
|
"text": "(Perozzi et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 419, |
|
"text": "(Guo et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 908, |
|
"end": 932, |
|
"text": "(Witten and Milne, 2008)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-based representation", |
|
"sec_num": "5.2" |
|
}, |
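{

"text": "A sketch of the matching-histogram interaction (following the idea of Guo et al. (2016); the bin edges and embedding sizes are our arbitrary choices):\nimport numpy as np\n\ndef cosine(u, v):\n    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))\n\ndef matching_histogram(C_s, C_t):\n    # soft matches between every entity pair in the two bags of entities\n    sims = [cosine(u, v) for u in C_s for v in C_t]\n    bins = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.0001])  # last bin catches exact matches\n    counts, _ = np.histogram(sims, bins=bins)\n    return np.log(counts + 1.0)  # logarithmic bin counts form the interaction vector\n\nrng = np.random.default_rng(2)\nC_s = rng.normal(size=(5, 16))  # DeepWalk-style embeddings of the entities linked from e_s\nC_t = rng.normal(size=(7, 16))\nprint(matching_histogram(C_s, C_t))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph-based representation",

"sec_num": null

},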
|
{ |
|
"text": "For learning representation from entity temporal signals, the intuition is to model the low-level temporal correlation between two multivariate time series. Specifically, we learn to embed these time series of equal size T into an Euclidean space, such that similar pairs are close to each other. Our embedding function takes the form of a convolutional neural network (CNN), shown in Figure 4 . The architecture rests on four basic layers: a 1-D convolutional (that restricts the slide only along the time window dimension, following (Zheng et al., 2014) ), a batch-norm, an attention-based and a fully connected layer. Convolution layer: A 1-D convolution operation involves applying a filter w f \u2208 R 1\u00d7w\u00d7D (i.e., a matrix of weight parameters) to each subsequence X i e of window size m to produce a new abstraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 555, |
|
"text": "(Zheng et al., 2014)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 393, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "q i = w f L i t:t+m\u22121,D +b; s i = BN(q i ); h i = ReLU(s i )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where L i t:t+w\u22121,D denotes the concatenation of w vectors in the lookup layer representing the subsequence X i e , b is a bias term. The convolutional layer is followed by a batch normalization (BN) layer (Ioffe and Szegedy, 2015) , to speed up the convergence and help improve generalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 231, |
|
"text": "(Ioffe and Szegedy, 2015)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
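{

"text": "The convolution of Equation (2) can be sketched as follows (our illustration with toy dimensions; the batch-norm is replaced by a simple standardization for a single series):\nimport numpy as np\n\nrng = np.random.default_rng(3)\nD, T, w_size = 3, 27, 5             # properties, series length, window size w\nX = rng.normal(size=(D, T))         # temporal signals of one entity\nw_f = rng.normal(size=(w_size, D))  # one filter over a (w x D) window\nb = 0.0\n\n# slide the filter along the time dimension only (1-D convolution)\nq = np.array([np.sum(w_f * X[:, t:t + w_size].T) + b for t in range(T - w_size + 1)])\ns = (q - q.mean()) / (q.std() + 1e-5)  # stand-in for batch normalization\nh = np.maximum(s, 0.0)                 # ReLU\nprint(h.shape)  # (T - w + 1,)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attention-based CNN for temporal representation",

"sec_num": null

},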
|
{ |
|
"text": "Attention Mechanism: We apply an attention layer on the convolutional outputs. Conceptually, attention mechanisms allow NN models to focus selectively on only the important fea- tures, based on the attention weights that often derived from the interaction with the target or within the input itself (self-attention) (Vaswani et al., 2017) . We adopt the former approach, with the intuition that the time-spatial patterns should not be treated equally, but the ones near the studied time should gain more focus. To ensure that each feature in F c i that associates with different timestamps are rewarded differently, the attention weights are guided by a time-decay weight function, in a recency-favor fashion. More formally, let A \u2208 R T \u2212w+1\u00d71 be the time context vector and F c i \u2208 R 1\u00d7(T \u2212w+1) the output of convolution for X. Then the k th column of the re-weighted feature map F h i is derived by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 338, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "F h i [:, k] = A[k] \u2022 F c i [:, k], k = 1 \u2022 \u2022 \u2022 T \u2212 w + 1 (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The time context vector a is generated by a decay weight function, since each column k in the vector is associated with a time t k which is T \u2212 k + w time units away from studied time t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Decay weight function: we leverage the Polynomial Curve for the function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "PD(t i ,t) = 1 (t\u2212t i ) \u03b1 +1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": ", whereas \u03b1 defines the decay rate. It is worth noting that when \u03b1 is increased, the attention layer acts just like a pooling one 1 . Stacking up multiple convolutional layers is possible, in this case |A| is the size of the previous layer. The attention layer is only applied to the last convolution layer in our architecture. The output of the attention layer is then passed to a fully-connected layer with non-linear activation to obtain the temporal representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention-based CNN for temporal representation", |
|
"sec_num": "5.3" |
|
}, |
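{

"text": "A sketch of the decay-guided attention of Equation (3) with the polynomial decay PD (our illustration; it reuses the toy dimensions of the convolution sketch above):\nimport numpy as np\n\ndef decay_weights(T, w_size, alpha):\n    # column k is associated with a time that is (T - k + w) units before the studied time\n    ages = np.array([T - k + w_size for k in range(1, T - w_size + 2)], dtype=float)\n    return 1.0 / (ages ** alpha + 1.0)  # PD(t_i, t) = 1 / ((t - t_i)^alpha + 1)\n\nT, w_size, alpha = 27, 5, 2.0\nrng = np.random.default_rng(4)\nF_c = rng.normal(size=(1, T - w_size + 1))  # convolutional feature map F^c_i\nA = decay_weights(T, w_size, alpha)         # time context vector\nF_h = A * F_c                               # element-wise re-weighting, Eq. (3)\nprint(F_h.shape)  # the most recent columns receive the largest weights",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attention-based CNN for temporal representation",

"sec_num": null

},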
|
{ |
|
"text": "Finally, we describe the optimization and training procedure of our network. We use a Logarithmic loss that can lead to better probability estimation at the cost of accuracy 2 . Our network minimizes the cross-entropy loss function as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning and Optimization", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "L = \u2212 1 N N \u2211 i=1 [P {e s ,e 1 ,e 2 } i log\u0233 i + (1 \u2212 P {e s ,e 1 ,e 2 } i ) log(1 \u2212\u0233 i )] + \u03bb |\u03b8 | 2 2 (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning and Optimization", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "where N is the training size,\u0233 is the output of the sigmoid layer on the predicted label. \u03b8 contains all the parameters of the network and \u03bb |\u03b8 | 2 2 is the L2 regularization. P {e s ,e (+) ,e (\u2212) } i is the probability that e (+) is ranked higher than e (\u2212) derived from entity navigation, P {e s ,e (+) ,e (\u2212)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning and Optimization", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "} i = y t(i) {e s ,e (+) } /(y t(i) {e s ,e (+) } + y t(i)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning and Optimization", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "{e s ,e (\u2212) } ), where t(i) is the observed time point of the training instance i. The network parameters are updated using Adam optimizer (Kingma and Ba, 2014).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning and Optimization", |
|
"sec_num": "5.4" |
|
}, |
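{

"text": "A sketch of the pairwise supervision and the loss of Equation (4) (toy numbers; the y values stand for navigation counts at time t(i)):\nimport numpy as np\n\ndef pair_prob(y_pos, y_neg):\n    # probability that e(+) outranks e(-), derived from navigation counts\n    return y_pos / (y_pos + y_neg)\n\ndef loss(P, y_bar, theta, lam=1e-4):\n    # cross-entropy between the navigation-derived P and the sigmoid output y_bar,\n    # plus L2 regularization over all network parameters theta\n    eps = 1e-9\n    ce = -np.mean(P * np.log(y_bar + eps) + (1 - P) * np.log(1 - y_bar + eps))\n    return ce + lam * np.sum(theta ** 2)\n\nP = np.array([pair_prob(120.0, 30.0), pair_prob(55.0, 45.0)])  # two training pairs\ny_bar = np.array([0.7, 0.52])                                  # network outputs\ntheta = np.full(10, 0.05)\nprint(loss(P, y_bar, theta))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning and Optimization",

"sec_num": null

},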
|
{ |
|
"text": "To recap from Section 4.1, we use the click stream datasets in 2016. We also use the corresponding Wikipedia article dumps, with over 4 million entities represented by actual pages. Since the length of the content of an Wikipedia article is often long, in this work, we make use of only its abstract section. To obtain temporal signals of the entity, we use page view statistics of Wikipedia articles and aggregate the counts by month. We fetch the data from June, 2014 up until the studied time, which results in the length of 27 months.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments 6.1 Dataset", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Seed entities and related candidates. To extract popular and trending entities, we extract from the clickstream data the top 10,000 entities based on the number of navigations from major search engines (Google and Bing), at the studied time. Getting the subset of related entity candidatesfor efficiency purposes-has been well-addressed in related work (Guo and Barbosa, 2014; Ponza et al., 2017) . In this work, we do not leverage a method and just assume the use of an appropriate one. In the experiment, we resort to choose only candidates which are visited from the seed entities at studied time. We filtered out entity-candidate pairs with too few navigations (less than 10) and considered the top-100 candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 376, |
|
"text": "(Guo and Barbosa, 2014;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 377, |
|
"end": 396, |
|
"text": "Ponza et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments 6.1 Dataset", |
|
"sec_num": "6" |
|
}, |
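{

"text": "A sketch of the candidate selection described above (hypothetical clickstream tuples; the field layout is ours):\nfrom collections import defaultdict\n\n# (source entity, target entity, navigation count) at the studied month\nclickstream = [\n    ('Taylor_Lautner', 'Scream_Queens', 950),\n    ('Taylor_Lautner', 'Run_the_Tide', 310),\n    ('Taylor_Lautner', 'Obscure_Page', 4),  # dropped: fewer than 10 navigations\n]\n\ncandidates = defaultdict(list)\nfor src, tgt, n in clickstream:\n    if n >= 10:  # filter out entity-candidate pairs with too few navigations\n        candidates[src].append((tgt, n))\n\nfor src in candidates:\n    # keep the top-100 most navigated candidates per seed entity\n    candidates[src] = sorted(candidates[src], key=lambda x: -x[1])[:100]\n\nprint(candidates['Taylor_Lautner'])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments 6.1 Dataset",

"sec_num": null

},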
|
{ |
|
"text": "In this paper, we compare our models against the following baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Wikipedia Link-based (WLM): Witten and Milne (2008) proposed a low-cost measure of semantic relatedness based on Wikipedia entity graph, inspired by Normalized Google Distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "DeepWalk (DW): DeepWalk (Perozzi et al., 2014) learned representations of vertices in a graph with a random walk generator and language modeling. We chose not to compare with the matrix factorization approach in (Zhao et al., 2015) , as even though it allows the incorporation of different relation types (i.e., among entity, category and word), the iterative computation cost over large graphs is very expensive. When consider only entity-entity relation, the performance is reported rather similar to DW.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 46, |
|
"text": "(Perozzi et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 231, |
|
"text": "(Zhao et al., 2015)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Entity2Vec Model (E2V): or entity embedding learning using Skip-Gram (Mikolov et al., 2013) model. E2V utilizes textual information to capture latent word relationships. Similar to Zhao et al. (2015) ; Ni et al. (2016) , we use Wikipedia articles as training corpus to learn word vectors and reserved hyperlinks between entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 91, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "Zhao et al. (2015)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 218, |
|
"text": "Ni et al. (2016)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "ParaVecs (PV): Le and Mikolov (2014) ; Dai et al. (2015) learned document/entity vectors via the distributed memory (ParaVecs-DM) and distributed bag of words (ParaVecs-DBOW) models, using hierarchical softmax. We use Wikipedia articles as training corpus to learn entity vectors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 36, |
|
"text": "Le and Mikolov (2014)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "RankSVM: Ceccarelli et al. (2013) learned entity relatedness from a set of 28 handcrafted features, using the traditional learning-to-rank method, RankSVM. We put together additional well-known temporal features (Kanhabua et al., 2014; Zhang et al., 2016b ) (i.e., time series cross correlation, trending level and predicted popularity based on page views) and report the results of the extended feature set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 235, |
|
"text": "(Kanhabua et al., 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 255, |
|
"text": "Zhang et al., 2016b", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For our approach, we tested different combinations of content (denoted as Content Emb ), graph, (Graph Emb ) and time (TS-CNN-Att) networks. We also test the content and graph networks with pretrained entity representations (i.e., ParaVecs-DM and DeepWalk).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models for Comparison", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Evaluation procedures. The time granularity is set to months. The studied time t n of our experiments is September 2016. From the seed queries, we use 80% for training, 10% for development and 10% for testing, as shown in Table 2 . Note that, for the time-aware setting and to avoid leakage and bias as much as possible, the data for training and development (including supervision) are up until time t n \u2212 1. In specific, for content and graph data, only t n \u2212 1 is used.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 229, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Metrics. We use 2 correlation coefficient methods, Pearson and Spearman, which have been used often throughout literature, cf. (Dallmann et al., 2016; Ponza et al., 2017) . The Pearson index focuses on the difference between predicted-vscorrect relatedness scores, while Spearman focuses on the ranking order among entity pairs. Our work studies on the strength of the dynamic relatedness between entities, hence we focus more on Pearson index. However, traditional correlation metrics do not consider the positions in the ranked list (correlations at the top or bottom are treated equally). For this reason, we adjust the metric to consider the rankings at specific top-k positions, which consequently can be used to measure the correlation for only top items in the ranking (based to the ground truth). In addition, we use Normalized Discounted Cumulative Gain (NDCG) measure to evaluate the recommendation tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 150, |
|
"text": "(Dallmann et al., 2016;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 170, |
|
"text": "Ponza et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6.3" |
|
}, |
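{

"text": "Our reading of the adjusted top-k correlation, as a sketch: restrict both score lists to the items ranked top-k by the ground truth, then correlate (Pearson shown; Spearman is analogous on ranks):\nimport numpy as np\n\ndef pearson(a, b):\n    a = np.asarray(a, float) - np.mean(a)\n    b = np.asarray(b, float) - np.mean(b)\n    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef pearson_at_k(truth, pred, k):\n    # indices of the top-k items according to the ground truth\n    top = np.argsort(truth)[::-1][:k]\n    return pearson(np.asarray(truth)[top], np.asarray(pred)[top])\n\ntruth = [100, 80, 60, 5, 3, 1]  # navigation-derived relatedness (toy)\npred = [90, 85, 10, 8, 2, 1]    # model scores (toy)\nprint(pearson_at_k(truth, pred, 3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": null

},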
|
{ |
|
"text": "Implementation details. All neural models are implemented in TensorFlow. Initial learning rate is tuned amongst {1.e-2, 1.e-3, 1.e-4, 1.e-5}. The batch size is tuned amongst {50, 100, 200}. The weight matrices are initialized with samples from the uniform distribution (Glorot and Bengio, 2010) . Models are trained for maximum 25 epochs. The hidden layers for each network are among {2, 3, 4}, while for hidden nodes are {128, 256, 512}. Dropout rate is set from {0.2, 0.3, 0.5}. The pretrained DW is empirically set to 128 dimensions, and 200 for PV. For CNN, the filter number are in {10, 20, 30}, window size in {4, 5, 6}, convolutional layers in {1, 2, 3} and decay rate \u03b1 in {1.0, 1.5,\u2022 \u2022 \u2022 ,7.5}. conv-layers with window size 5 and 4, number of filters of 20 and 25 respectively are used for decay hyperparameter analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 294, |
|
"text": "(Glorot and Bengio, 2010)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6.3" |
|
}, |
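{

"text": "The search space above, written out as a configuration sketch (values copied from the text; this is not the authors' actual configuration file):\nsearch_space = {\n    'learning_rate': [1e-2, 1e-3, 1e-4, 1e-5],\n    'batch_size': [50, 100, 200],\n    'hidden_layers': [2, 3, 4],\n    'hidden_nodes': [128, 256, 512],\n    'dropout': [0.2, 0.3, 0.5],\n    'cnn_filters': [10, 20, 30],\n    'cnn_window': [4, 5, 6],\n    'conv_layers': [1, 2, 3],\n    'decay_alpha': [1.0 + 0.5 * i for i in range(14)],  # 1.0, 1.5, ..., 7.5\n}\nprint(search_space['decay_alpha'][-1])  # 7.5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": null

},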
|
{ |
|
"text": "We evaluate our proposed method in two different scenarios: (1) Relatedness ranking and (2) Entity recommendation. The first task evaluates how well we can mimic the ranking via the entity navigation. Here we use the raw number of navigations in Wikipedia clickstream. The second task is formulated as: given an entity, suggest the top-k most related entities to it right now. Since there is no standard ground-truth for this temporal task, we constructed two relevance ground-truths. The first one is the proxy ground-truth, with relevance grade is automatically assigned from the (top-100) most navigated target entities. The graded relevance score is then given as the reversed rank order. For this, all entities in the test set are used. The second one is based on the human judgments with 5-level graded relevance scale, i.e., from 4 -highly relevant to 0 -not (temporally) relevant. Two human experts evaluate on the subset of 20 entities (randomly sampled from the test set), with 600 entity pairs (approx. 30 per seed, using pooling method). The ground-truth size is comparable the widely used ground-truth for static relatedness assessment, KORE (Hoffart et al., 2012) . The Cohen's Kappa agreement is 0.72. Performance of the best-performed models on this dataset is then tested with paired t-test against the WLM baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1155, |
|
"end": 1177, |
|
"text": "(Hoffart et al., 2012)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Tasks", |
|
"sec_num": "6.4" |
|
}, |
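{

"text": "A sketch of the proxy ground-truth construction (graded relevance as the reversed rank order over the top-100 most navigated targets; the entity names are toy examples):\ndef proxy_relevance(ranked_targets):\n    # rank 1 (most navigated) gets the highest grade, the last rank gets grade 1\n    k = len(ranked_targets)\n    return {e: k - i for i, e in enumerate(ranked_targets)}\n\ntargets = ['Scream_Queens', 'Run_the_Tide', 'The_Twilight_Saga']  # toy top list\nprint(proxy_relevance(targets))  # {'Scream_Queens': 3, 'Run_the_Tide': 2, ...}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Tasks",

"sec_num": null

},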
|
{ |
|
"text": "We report the performance of the relatedness ranking on the left side of Table 3 , with the Pearson and Spearman metrics. Among existing baselines, we observe that link-based approaches i.e., WLM and DeepWalk perform better than others for top-k correlation. Whereas, temporal models yield substantial improvement overall. Specifically, the TS-CNN-Att performs better than the no-attention model in most cases, improves 11% for Pearson@10, and 3% when considering the total rank. Our trio model performs well overall, gives best results for total rank. The duo models (combine base with either pretrained DW or PV) also deliver improvements over the sole temporal ones. We also observer additional gains while combining of temporal base with pretrained DW and PV altogether.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on Relatedness Ranking", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "Here we report the results on the nDCG metrics. Table 3 (right-side) demonstrates the results for two ground-truth settings (proxy and human). We can observe the good performance of the baselines for this task over conventional temporal models, significantly for proxy setting. It can be explained that, 'static' entity relations are ranked high in the non time-aware baselines, hence are still rewarded when considering a fine-grained grading scale (100 level). The margin becomes smaller when comparing in human setting, with the standard 5-level scale. All the models with pretrained representations perform poorly. It shows that for this task, early interaction-based approach is more suitable than purely based on representation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 55, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on Entity Recommendation", |
|
"sec_num": "6.6" |
|
}, |
|
{ |
|
"text": "We present an anecdotic example of top-selected entities for Kingsman: The Golden Circle in Table 4 . While the content-based model favors old relations like the preceding movies, TS-CNN puts popular actress Halle Berry or the recent released X-men: Apocalypse on top. The latter is not ideal as there is not a solid relationship between the two movies. One implication is that the two entities are ranked high is more because of the popularity of themself than the strength of the relationship toward the source entity. The Trio model addresses the issue by taking other perspectives into account, and also balances out the recency and long-term factors, gives the best ranking performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional Analysis", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "Analysis on decay hyper-parameter. We give a study on the effect of decay parameter on performance. Figure 5a illustrates the results on Pearson all and nDCG@10 for the trio model. It can be seen that while nDCG slightly increases, Pearson score peaks while \u03b1 in the range [1.5, 3.5]. Additionally, we show the convergence analysis on \u03b1 for TS-CNN-Att in Figure 6 . Bigger \u03b1 tends to converge faster, but to a significant higher loss when \u03b1 is over 5.5 (omitted from the Figure) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 109, |
|
"text": "Figure 5a", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 363, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 478, |
|
"text": "Figure)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional Analysis", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "Performances on different entity types. We demonstrate in Figures 5b and 5c the model performances on the person and event types. WLM performs poorer for the latter, that can be interpreted as link-based methods tend to slowly adapt", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 75, |
|
"text": "Figures 5b and 5c", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Additional Analysis", |
|
"sec_num": "6.7" |
|
}, |
|
{ |
|
"text": "Pearson \u00d7100 \u03c1 \u00d7 100 nDCG (proxy) nDCG (human) @10 @30 @50 all all @3 @10 @20 @3 @10 @20 Table 3 : Performance of different models on task (1) Pearson, Spearman's \u03c1 ranking correlation, and task (2) recommendation (measured by nDCG). Bold and underlined numbers indicate best and secondto-best results. \u2213 shows statistical significant over WLM (p < 0.05). for recent trending entities. The temporal models seem to capture these entites better.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 96, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this work, we presented a trio neural model to solve the dynamic entity relatedness ranking problem. The model jointly learns rich representations of entities from textual content, graph and temporal signals. We also propose an effective CNNbased attentional mechanism for learning the tem- poral representation of an entity. Experiments on ranking correlations and top-k recommendation tasks demonstrate the effectiveness of our approach over existing baselines. For future work, we aim to incorporate more temporal signals, and investigate on different 'trainable' attention mechanisms to go beyond the time-based decay, for instance by incorporating latent topics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Note that, for clear visualization, we put flattening before attention layer inFigure 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Other ranking-based loss such as Hinge loss favours over sparsity and accuracy (in the sense of direct punishing misclassification via margins) at the cost of probability estimation. The logistic loss distinguishes better between examples whose supervision scores are close.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgments. This work is funded by the ERC Advanced Grant ALEXANDRIA (grant no. 339233). We thank the reviewers for the suggestions on the content and structure of the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Wikipediabased distributional semantics for entity relatedness", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Aggarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Buitelaar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "AAAI Fall Symposium Series", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Aggarwal and Paul Buitelaar. 2014. Wikipedia- based distributional semantics for entity relatedness. In 2014 AAAI Fall Symposium Series.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Entity recommendations in web search", |
|
"authors": [ |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Blanco", |
|
"suffix": "" |
|
}, |
|
{

"first": "Berkant Barla",

"middle": [],

"last": "Cambazoglu",

"suffix": ""

},

{

"first": "Peter",

"middle": [],

"last": "Mika",

"suffix": ""

},

{

"first": "Nicolas",

"middle": [],

"last": "Torzec",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "ISWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roi Blanco, Berkant Barla Cambazoglu, Peter Mika, and Nicolas Torzec. 2013. Entity recommendations in web search. In ISWC, pages 33-48. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning relatedness measures for entity linking", |
|
"authors": [ |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Ceccarelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudio", |
|
"middle": [], |
|
"last": "Lucchese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvatore", |
|
"middle": [], |
|
"last": "Orlando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaele", |
|
"middle": [], |
|
"last": "Perego", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvatore", |
|
"middle": [], |
|
"last": "Trani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd ACM international conference on Information & Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diego Ceccarelli, Claudio Lucchese, Salvatore Or- lando, Raffaele Perego, and Salvatore Trani. 2013. Learning relatedness measures for entity linking. In Proceedings of the 22nd ACM international con- ference on Information & Knowledge Management, pages 139-148. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Label ranking with partial abstention based on thresholded probabilistic models", |
|
"authors": [ |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eyke", |
|
"middle": [], |
|
"last": "H\u00fcllermeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Waegeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Volkmar", |
|
"middle": [], |
|
"last": "Welker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "2501--2509", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiwei Cheng, Eyke H\u00fcllermeier, Willem Waegeman, and Volkmar Welker. 2012. Label ranking with partial abstention based on thresholded probabilistic models. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural In- formation Processing Systems 25, pages 2501-2509. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning a similarity metric discriminatively, with application to face verification", |
|
"authors": [ |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raia", |
|
"middle": [], |
|
"last": "Hadsell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computer Vision and Pattern Recognition", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "539--546", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539-546. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Document embedding with paragraph vectors", |
|
"authors": [ |
|
{

"first": "Andrew",

"middle": [

"M"

],

"last": "Dai",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "Olah",

"suffix": ""

},

{

"first": "Quoc",

"middle": [

"V"

],

"last": "Le",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1507.07998" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew M Dai, Christopher Olah, and Quoc V Le. 2015. Document embedding with paragraph vec- tors. arXiv preprint arXiv:1507.07998.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Extracting semantics from random walks on wikipedia: Comparing learning and counting methods", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Dallmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Niebler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Lemmerich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Hotho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Dallmann, Thomas Niebler, Florian Lem- merich, and Andreas Hotho. 2016. Extracting se- mantics from random walks on wikipedia: Compar- ing learning and counting methods.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Computing semantic relatedness using wikipediabased explicit semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI'07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1606--1611", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipedia- based explicit semantic analysis. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI'07, pages 1606-1611, San Fran- cisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Wikipedia-based semantic interpretation for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaul", |
|
"middle": [], |
|
"last": "Markovitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "443--498", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2009. Wikipedia-based semantic interpretation for natural language processing. Journal of Artificial Intelli- gence Research, 34:443-498.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Understanding the difficulty of training deep feedforward neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the thirteenth international conference on artificial intelligence and statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neu- ral networks. In Proceedings of the thirteenth in- ternational conference on artificial intelligence and statistics, pages 249-256.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A deep relevance matching model for ad-hoc retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Jiafeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixing", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingyao", |
|
"middle": [], |
|
"last": "Ai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 55-64. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Robust entity linking via random walks", |
|
"authors": [ |
|
{ |
|
"first": "Zhaochen", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denilson", |
|
"middle": [], |
|
"last": "Barbosa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "499--508", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhaochen Guo and Denilson Barbosa. 2014. Robust entity linking via random walks. In Proceedings of the 23rd ACM International Conference on Confer- ence on Information and Knowledge Management, pages 499-508. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Kore: keyphrase overlap relatedness for entity disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Seufert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dat", |
|
"middle": [], |
|
"last": "Ba Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Theobald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 21st ACM international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "545--554", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. Kore: keyphrase overlap relatedness for entity dis- ambiguation. In Proceedings of the 21st ACM inter- national conference on Information and knowledge management, pages 545-554. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Entity hierarchy embedding", |
|
"authors": [ |
|
{ |
|
"first": "Zhiting", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Poyao", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingkai", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1292--1300", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiting Hu, Poyao Huang, Yuntian Deng, Yingkai Gao, and Eric Xing. 2015. Entity hierarchy embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), vol- ume 1, pages 1292-1300.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning deep structured semantic models for web search using clickthrough data", |
|
"authors": [ |
|
{ |
|
"first": "Po-Sen", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Acero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Heck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2333--2338", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on informa- tion & knowledge management, pages 2333-2338. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Ioffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "448--456", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Towards time-aware knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Tingsong", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyu", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Sha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baobao", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifang", |
|
"middle": [], |
|
"last": "Sui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1715--1724", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, and Zhifang Sui. 2016. Towards time-aware knowledge graph completion. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 1715-1724.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning to rank related entities in web search", |
|
"authors": [ |
|
{ |
|
"first": "Changsung", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawei", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruiqiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Torzec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianzhang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Neurocomputing", |
|
"volume": "166", |
|
"issue": "", |
|
"pages": "309--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Changsung Kang, Dawei Yin, Ruiqiang Zhang, Nico- las Torzec, Jianzhang He, and Yi Chang. 2015. Learning to rank related entities in web search. Neu- rocomputing, 166:309-318.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "What triggers human remembering of events? a large-scale analysis of catalysts for collective memory in wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Nattiya", |
|
"middle": [], |
|
"last": "Kanhabua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tu", |
|
"middle": [ |
|
"Ngoc" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Nieder\u00e9e", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Digital Libraries (JCDL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "341--350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nattiya Kanhabua, Tu Ngoc Nguyen, and Claudia Nieder\u00e9e. 2014. What triggers human remember- ing of events? a large-scale analysis of catalysts for collective memory in wikipedia. In Digital Li- braries (JCDL), 2014 IEEE/ACM Joint Conference on, pages 341-350. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1188--1196", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188-1196.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Hybrid neural networks for learning the trend in time series", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tian", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Aberer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Lin, Tian Guo, and Karl Aberer. 2017. Hybrid neu- ral networks for learning the trend in time series.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A deep architecture for matching short texts", |
|
"authors": [ |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1367--1375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengdong Lu and Hang Li. 2013. A deep architec- ture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367-1375.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Effective approaches to attentionbased neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.04025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "From selena gomez to marlon brando: Understanding explorative entity search", |
|
"authors": [ |
|
{ |
|
"first": "Iris", |
|
"middle": [], |
|
"last": "Miliaraki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Blanco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mounia", |
|
"middle": [], |
|
"last": "Lalmas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 24th International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "765--775", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iris Miliaraki, Roi Blanco, and Mounia Lalmas. 2015. From selena gomez to marlon brando: Understand- ing explorative entity search. In Proceedings of the 24th International Conference on World Wide Web, pages 765-775. International World Wide Web Con- ferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Entity linking meets word sense disambiguation: a unified approach", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Moro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Raganato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "231--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity linking meets word sense disam- biguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231- 244.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Multiple models for recommending temporal aspects of entities", |
|
"authors": [ |
|
{ |
|
"first": "Tu", |
|
"middle": [ |
|
"Ngoc" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nattiya", |
|
"middle": [], |
|
"last": "Kanhabua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Nejdl", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2018, |
|
"venue": "The Semantic Web -15th International Conference, ESWC 2018, Heraklion", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "462--480", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-93417-4_30" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tu Ngoc Nguyen, Nattiya Kanhabua, and Wolfgang Nejdl. 2018. Multiple models for recommending temporal aspects of entities. In The Semantic Web -15th International Conference, ESWC 2018, Her- aklion, Crete, Greece, June 3-7, 2018, Proceedings, pages 462-480.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Semantic documents relatedness using concept graph representation", |
|
"authors": [ |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Ni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiong", |
|
"middle": [ |
|
"Kai" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feng", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yosi", |
|
"middle": [], |
|
"last": "Mass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dafna", |
|
"middle": [], |
|
"last": "Sheinwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [ |
|
"Jia" |
|
], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shao Sheng", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, WSDM '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "635--644", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2835776.2835801" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuan Ni, Qiong Kai Xu, Feng Cao, Yosi Mass, Dafna Sheinwald, Hui Jia Zhu, and Shao Sheng Cao. 2016. Semantic documents relatedness using con- cept graph representation. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, WSDM '16, pages 635-644, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Francisco", |
|
"middle": [ |
|
"Javier" |
|
], |
|
"last": "Ord\u00f3\u00f1ez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Roggen", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2016, |
|
"venue": "Sensors", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francisco Javier Ord\u00f3\u00f1ez and Daniel Roggen. 2016. Deep convolutional and lstm recurrent neural net- works for multimodal wearable activity recognition. Sensors, 16(1):115.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Deepwalk: Online learning of social representations", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Perozzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Al-Rfou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Skiena", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "701--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social rep- resentations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A two-stage framework for computing entity relatedness in wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Ponza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ferragina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumen", |
|
"middle": [], |
|
"last": "Chakrabarti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1867--1876", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3132847.3132890" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Ponza, Paolo Ferragina, and Soumen Chakrabarti. 2017. A two-stage framework for computing entity relatedness in wikipedia. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17, pages 1867-1876, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Beyond time: Dynamic context-aware entity recommendation", |
|
"authors": [ |
|
{ |
|
"first": "Nam Khanh", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tuan", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Nieder\u00e9e", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "European Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nam Khanh Tran, Tuan Tran, and Claudia Nieder\u00e9e. 2017. Beyond time: Dynamic context-aware entity recommendation. In European Semantic Web Con- ference, pages 353-368. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "An effective, low-cost measure of semantic relatedness obtained from wikipedia links", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Milne", |
|
"suffix": "" |
|
}
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian H Witten and David N Milne. 2008. An effective, low-cost measure of semantic relatedness obtained from wikipedia links.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Abcnn: Attention-based convolutional neural network for modeling sentence pairs", |
|
"authors": [ |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "259--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convo- lutional neural network for modeling sentence pairs. Transactions of the Association of Computational Linguistics, 4(1):259-272.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "On building entity recommender systems using user click log and freebase knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo-June Paul", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of WSDM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Yu, Hao Ma, Bo-June Paul Hsu, and Jiawei Han. 2014. On building entity recommender systems us- ing user click log and freebase knowledge. In Pro- ceedings of WSDM, pages 263-272. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Collaborative knowledge base embedding for recommender systems", |
|
"authors": [ |
|
{ |
|
"first": "Fuzheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [ |
|
"Jing" |
|
], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Defu", |
|
"middle": [], |
|
"last": "Lian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016a. Collaborative knowledge base embedding for recommender sys- tems. In Proceedings of the 22nd ACM SIGKDD in- ternational conference on knowledge discovery and data mining, pages 353-362. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "A probabilistic model for time-aware entity recommendation", |
|
"authors": [ |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Achim", |
|
"middle": [], |
|
"last": "Rettinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Semantic Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "598--614", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lei Zhang, Achim Rettinger, and Ji Zhang. 2016b. A probabilistic model for time-aware entity recom- mendation. In International Semantic Web Confer- ence, pages 598-614. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Representation learning for measuring entity relatedness with rich information", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Zhao, Zhiyuan Liu, and Maosong Sun. 2015. Rep- resentation learning for measuring entity relatedness with rich information. In Twenty-Fourth Interna- tional Joint Conference on Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Time series classification using multichannels deep convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Web-Age Information Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "298--310", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Zheng, Qi Liu, Enhong Chen, Yong Ge, and J Leon Zhao. 2014. Time series classification using multi- channels deep convolutional neural networks. In International Conference on Web-Age Information Management, pages 298-310. Springer.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The dynamics of collective attention for related entities of Taylor Lautner in 2016." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Click times distribution (b) Correlation of top-k entities (c) Correlation by # of navigations" |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Click (navigation) times distribution and ranking correlation of entities in September 2016." |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The trio neural model for entity ranking." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "where w i and b i are the weight matrix and bias at hidden layer i, \u03c3 is a non-linear function such as the rectified linear unit(ReLU). The final score under the trio setup is summed from multiple networks.\u03c6 (< e s , e (+) , e (\u2212) >) = \u03c6 time + \u03c6 graph + \u03c6 content" |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "The attentional CNN for time series representation." |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Model performances for persontype entities. (c) Model performances for social event-type entities." |
|
}, |
|
"FIGREF7": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Performance results for variation of decay parameter and different entity types. Convergence of decay parameters." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Statistics on the dynamic of clickstream, e s denote source entities, e t related entities.and changes substantially with regard to time.Figure2c visualizes the dynamics of related entities toward different ranking sections (e.g., from rank 0 to rank 20) of different months, in terms of their correlation scores. It can be interpreted that the entities that stay in top-20 most related ones tend to be more correlated than entities in bottom-20 when considering top-100 related entities.", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Statistics of the dataset.", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Different top-k rankings for entity Kingsman: The Golden Circle. Italic means irrelevance.", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |