{
"paper_id": "C14-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:23:26.715203Z"
},
"title": "Time-aware Personalized Hashtag Recommendation on Social Media",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": ""
},
{
"first": "Xuyang",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "Shanghai Key Laboratory of Intelligent Information Processing",
"institution": "Fudan University",
"location": {
"addrLine": "825 Zhangheng Road",
"settlement": "Shanghai",
"country": "P.R.China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of recommending hashtags for microblogs has been received considerable attention in recent years, and many applications can reap enormous benefits from it. Various approaches have been proposed to study the problem from different aspects. However, the impacts of temporal and personal factors have rarely been considered in the existing methods. In this paper, we propose a novel method that extends the translation based model and incorporates the temporal and personal factors. To overcome the limitation of only being able to recommend hashtags that exist in the training data of the existing methods, the proposed method also incorporates extraction strategies into it. The results of experiments on the data collected from real world microblogging services by crawling demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. The relative improvement of the proposed method over the method without considering these aspects is around 47.8% in F1-score.",
"pdf_parse": {
"paper_id": "C14-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of recommending hashtags for microblogs has been received considerable attention in recent years, and many applications can reap enormous benefits from it. Various approaches have been proposed to study the problem from different aspects. However, the impacts of temporal and personal factors have rarely been considered in the existing methods. In this paper, we propose a novel method that extends the translation based model and incorporates the temporal and personal factors. To overcome the limitation of only being able to recommend hashtags that exist in the training data of the existing methods, the proposed method also incorporates extraction strategies into it. The results of experiments on the data collected from real world microblogging services by crawling demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. The relative improvement of the proposed method over the method without considering these aspects is around 47.8% in F1-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the past few years, social media services have become one of the most important communication channels for people. According to the statistic reported by the Pew Research Center's Internet & American Life Project in Aug 5, 2013, about 72% of adult internet users are also members of at least one social networking site. Hence, microblogs have also been widely used as data sources for public opinion analyses (Bermingham and Smeaton, 2010; , prediction (Asur and Huberman, 2010; Bollen et al., 2011) , reputation management (Pang and Lee, 2008; Otsuka et al., 2012) , and many other applications (Sakaki et al., 2010; Becker et al., 2010; Guy et al., 2010; Guy et al., 2013) . In addition to the limited number of characters in the content, microblogs also contain a form of metadata tag (hashtag), which is a string of characters preceded by the symbol (#). Hashtags are used to mark the keywords or topics of a microblog. They can occur anywhere in a microblog, at the beginning, middle, or end. Hashtags have been proven to be useful for many applications, including microblog retrieval (Efron, 2010) , query expansion (A. Bandyopadhyay et al., 2011) , sentiment analysis (Davidov et al., 2010; Wang et al., 2011) . However, only a few microblogs contain hashtags provided by their authors. Hence, the task of recommending hashtags for microblogs has become an important research topic and has received considerable attention in recent years.",
"cite_spans": [
{
"start": 414,
"end": 444,
"text": "(Bermingham and Smeaton, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 458,
"end": 483,
"text": "(Asur and Huberman, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 484,
"end": 504,
"text": "Bollen et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 529,
"end": 549,
"text": "(Pang and Lee, 2008;",
"ref_id": "BIBREF25"
},
{
"start": 550,
"end": 570,
"text": "Otsuka et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 601,
"end": 622,
"text": "(Sakaki et al., 2010;",
"ref_id": "BIBREF28"
},
{
"start": 623,
"end": 643,
"text": "Becker et al., 2010;",
"ref_id": "BIBREF2"
},
{
"start": 644,
"end": 661,
"text": "Guy et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 662,
"end": 679,
"text": "Guy et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 1095,
"end": 1108,
"text": "(Efron, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 1131,
"end": 1158,
"text": "Bandyopadhyay et al., 2011)",
"ref_id": null
},
{
"start": 1180,
"end": 1202,
"text": "(Davidov et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 1203,
"end": 1221,
"text": "Wang et al., 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing works have studied discriminative models (Ohkura et al., 2006; Heymann et al., 2008) and generative models (Krestel et al., 2009; Blei and Jordan, 2003; Ding et al., 2013) based on textual information from a single microblog. However, from a dataset containing 282.2 million microblogs crawled from Sina Weibo 1 , we observe that different users may have different perspectives when picking hashtags, and the perspectives of users are impacted by their own interests or the global topic trend. Meanwhile,the global topic distribution is likely to change over time. To better understand how the topics vary over time, we aggregate the microblog posts published in a month as a document. Then, we use a Latent Dirichlet Allocation (LDA) to estimate their topics. Figure 1 illustrates an example, where ten active topics are selected. We can observe that the topics distribution varies greatly over time. 0 200 400 600 800 1000 1200 pay official staff support ministry statistics tomorrow reproduce research financial Figure 1 : An example of the topics of retweets in each month. Each colored stripe represents a topic, whose height is the number of words assigned to the topic. For each topic, the top words of this topic in each month are placed on the stripe.",
"cite_spans": [
{
"start": 50,
"end": 71,
"text": "(Ohkura et al., 2006;",
"ref_id": "BIBREF23"
},
{
"start": 72,
"end": 93,
"text": "Heymann et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 116,
"end": 138,
"text": "(Krestel et al., 2009;",
"ref_id": "BIBREF16"
},
{
"start": 139,
"end": 161,
"text": "Blei and Jordan, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 162,
"end": 180,
"text": "Ding et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 770,
"end": 778,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1047,
"end": 1055,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
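To make the monthly topic analysis behind Figure 1 concrete, the following is a minimal sketch, not the authors' code: it aggregates posts by month and fits LDA with scikit-learn; the `monthly_posts` mapping, the toy texts, and the library choice are assumptions for illustration.

```python
# Minimal sketch (not the authors' code) of the Figure 1 analysis: aggregate
# posts by month into one document each and fit LDA. `monthly_posts` and the
# toy texts are placeholders; the real input is the crawled Weibo corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

monthly_posts = {
    "2012-08": ["pay raise for staff announced", "ministry releases statistics"],
    "2012-09": ["official support for research funding", "financial report due tomorrow"],
}

months = sorted(monthly_posts)
docs = [" ".join(monthly_posts[m]) for m in months]   # one document per month

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topic = lda.fit_transform(X)                      # per-month topic proportions
print(dict(zip(months, doc_topic.round(2).tolist())))

# Top words per topic, analogous to the labels placed on each stripe in Figure 1.
vocab = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", [vocab[i] for i in comp.argsort()[-3:][::-1]])
```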
{
"text": "Motivated by the methods proposed to handle the vocabulary gap problem for keyphrase extraction (Liu et al., 2012) and hashtag suggestion (Ding et al., 2013) , in this work, we also assume that the hashtags and textual content in a microblog are parallel descriptions of the same thing in different languages. To model the document themes, in this paper, we adopt the topical translation model to facilitate the translation process. Topic-specific word triggers are used to bridge the gap between the words and hashtags. Since existing topical translation models can only recommend hashtags learned from the training data, we also incorporate an extraction process into the model. This work makes three main contributions. First, we incorporate temporal and personal factors into considerations. Most of the existing works on hashtag recommendation tasks have focused on textual information. Second, we adopt a topical translation model to combine extraction and translation methods. This makes it possible to suggest hashtags that are not included in the training data. Third, to evaluate the task, we construct a large collection of microblogs from a real microblogging service. All of the microblogs in the collection contain textual content and hashtags labeled by their authors. This can benefit other researchers investigating the same task or other topics using author-centered data.",
"cite_spans": [
{
"start": 96,
"end": 114,
"text": "(Liu et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 138,
"end": 157,
"text": "(Ding et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remaining part of this paper is structured as follows: We briefly review existing methods in related domains in Section 2. Section 3 gives an overview of the proposed generation model. Section 4 introduces the dataset construction, experimental results and analyses. In Section 5, we will conclude the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the usefulness of tag recommendation, many methods have been proposed from different perspectives (Heymann et al., 2008; Krestel et al., 2009; Rendle et al., 2009; Liu et al., 2012; Ding et al., 2013) . Heymann et al. (Heymann et al., 2008) investigated the tag recommendation problem using the data collected from social bookmarking system. They introduced an entropy-based metric to capture the generality of a particular tag. In (Song et al., 2008) , a Poisson Mixture Model based method is introduced to achieve the tag recommendation task. Krestel et al. (Krestel et al., 2009) introduced a Latent Dirichlet Allocation to elicit a shared topical structure from the collaborative tagging effort of multiple users for recommending tags. Based on the the observation that similar webpages tend to have the same tags, Lu et al. proposed a method taking both tag information and page content into account to achieve the task (Lu et al., 2009) . Ding et al. proposed to use translation process to model this task (Ding et al., 2013) . They extended the translation based method and introduced a topic-specific translation model to process the various meanings of words in different topics. In (Tariq et al., 2013) , discriminative-term-weights were used to establish topic-term relationships, of which users' perception were learned to suggest suitable hashtags for users. To handle the vocabulary problem in keyphrase extraction task, Liu et al. proposed a topical word trigger model, which treated the keyphrase extraction problem as a translation process with latent topics (Liu et al., 2012) .",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(Heymann et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 128,
"end": 149,
"text": "Krestel et al., 2009;",
"ref_id": "BIBREF16"
},
{
"start": 150,
"end": 170,
"text": "Rendle et al., 2009;",
"ref_id": "BIBREF27"
},
{
"start": 171,
"end": 188,
"text": "Liu et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 189,
"end": 207,
"text": "Ding et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 225,
"end": 247,
"text": "(Heymann et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 439,
"end": 458,
"text": "(Song et al., 2008)",
"ref_id": "BIBREF30"
},
{
"start": 552,
"end": 589,
"text": "Krestel et al. (Krestel et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 932,
"end": 949,
"text": "(Lu et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 1019,
"end": 1038,
"text": "(Ding et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 1199,
"end": 1219,
"text": "(Tariq et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 1583,
"end": 1601,
"text": "(Liu et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Most of the works mentioned above are based on textual information. Besides these methods, personalized methods for different recommendation tasks have also been paid lots of attentions (Liang et al., 2007; Shepitsen et al., 2008; Garg and Weber, 2008; Li et al., 2010; Liang et al., 2010; Rendle and Schmidt-Thieme, 2010) . Shepitsen et al. (2008) proposed to use hierarchical agglomerative clustering to take into account personalized navigation context in cluster selection. In (Garg and Weber, 2008) , the problem of personalized, interactive tag recommendation was also studied based on the statics of the tags co-occurrence. Liang et al. (2010) proposed to the multiple relationships among users, items and tags to find the semantic meaning of each tag for each user individually and used this information for personalized item recommendation.",
"cite_spans": [
{
"start": 186,
"end": 206,
"text": "(Liang et al., 2007;",
"ref_id": "BIBREF18"
},
{
"start": 207,
"end": 230,
"text": "Shepitsen et al., 2008;",
"ref_id": "BIBREF29"
},
{
"start": 231,
"end": 252,
"text": "Garg and Weber, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 253,
"end": 269,
"text": "Li et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 270,
"end": 289,
"text": "Liang et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 290,
"end": 322,
"text": "Rendle and Schmidt-Thieme, 2010)",
"ref_id": "BIBREF26"
},
{
"start": 325,
"end": 348,
"text": "Shepitsen et al. (2008)",
"ref_id": "BIBREF29"
},
{
"start": 481,
"end": 503,
"text": "(Garg and Weber, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 631,
"end": 650,
"text": "Liang et al. (2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "From the brief descriptions given above, we can observe that most of the previous works on hashtag suggestion focused on textual information. In this work, we propose to incorporate temporal and personal information into the generative methods. Further more, to over the limitation that translation based method can only recommend hashtags learned from the training data, we also propose to incorporate an extraction process into the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this section, we firstly introduce the notation and generation process of the proposed method. Then, we describe the method used for learning parameters. Finally, we present the methods of how do we apply the learned model to achieve the hashtag recommendation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Methods",
"sec_num": "3"
},
{
"text": "We use D to represent the number of microblogs in the given corpus, and the microblogs have been divided into T epoches. Let t = 1, 2, ..., T be the index of an epoches, \u03b8 t is the topic distribution of the epoch t. Each microblog is generated by a user u i , where u i is an index between 1 and U , and U is the total number of users. A microblog is a sequence of N d words denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "w d = {w d1 , w d2 , ..., w dN d }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "Each microblog contains a set of hashtags denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "h d = {h d1 , h d2 , ..., h dM d }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "A word is defined as an item from a vocabulary with W distinct words indexed by w = {w 1 , w 2 , ..., w W }. Each hashtag is from the vocabulary with V distinct hashtags indexed by h = {h 1 , h 2 , ..., h V }. The notations in this paper are summarized in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "The original LDA assumes that a document is contains a mixture of topics, which is represented by a topic distribution, and each word has a hidden topic label. Although, it is sensible for long document, due to the limitations of the length of characters in a single microblog, it tends to be about a single topic. Hence, we associate a single hidden variable with each microblog to indicate its topic. Similar idea of assigning a single topic to a short sequence of words has also been used for modeling Twitters (Zhao et al., 2011) The hashtag recommendation task is to discover a list of hashtags for each unlabeled microblog, In our method, we first learn a topical translation model, and then we estimate the latent variables for each microblog, finaly recommending hashtags accord to the learned model. Fig. 2 shows the graphical representation of the generation process. The generative story for each microblog is as follows:",
"cite_spans": [
{
"start": 505,
"end": 533,
"text": "Twitters (Zhao et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 809,
"end": 815,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Generation Process",
"sec_num": "3.1"
},
{
"text": "To learn the parameters of our model, we use collapsed Gibbs sampling (Griffiths and Steyvers, 2004) to sample the topics assignment z, latent variables assignment x and y.",
"cite_spans": [
{
"start": 70,
"end": 100,
"text": "(Griffiths and Steyvers, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "Given the current state of all but the variable x d and z d for the dth microblog, we can jointly sample 1. Draw \u03c0 \u223c Beta(\u03b4), \u03b7 \u223c Beta(\u03bb) 2. Draw background word distribution \u03c6 B \u223c Dirichlet(\u03b2 w ) 3. Draw global trendy topic distribution \u03b8 t \u223c Dirichlet(\u03b1) for each time epoch t = 1, 2, ..., T 4. Draw personal topic distribution \u03c8 u \u223c Dirichlet(\u03b1) for each user u = 1, 2, ..., U 5. Draw word distribution \u03c6 z \u223c Dirichlet(\u03b2 w ) for each topic z = 1, 2, ..., K 6. Draw hashtag distribution \u03d5 z,w \u223c Dirichilet(\u03b2 h ) for each topic z = 1, 2, ..., K and each word w = 1, 2, ..., W 7. For each microblog Figure 2 : The graphical representation of the proposed model. Shaded circles are observations or constants. Unshaded ones are hidden variables. ",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 607,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "d = 1, 2, ..., D a. Draw x d \u223c Bernoulli(\u03b7) b. If x d = 0 then Draw a topic z d \u223c M ultinomial(\u03c8 u ) End if If x d = 1 then Draw a topic z d \u223c M ultinomial(\u03b8 t ) End if c. For each word n = 1, ..., N d i. Draw y dn \u223c Bernoulli(\u03c0) ii. If y dn = 0 then Draw a word w dn \u223c M ultinomial(\u03c6 B ) End if If y dn = 1 then Draw a word w dn \u223c M ultinomial(\u03c6 z d ) End if d. For each hashtag m = 1, ..., M d i. Draw h dm \u223c P (h dm |w d , z d , \u03d5 z d ,w d ) w dn z d \u03b8 t \u03c8 u t d u d x d \u03b7 \u03bb \u03b1 \u03b1 h dm y dn \u03c0 \u03b4 \u03c6 z \u03c6 B \u03b2 w \u03b2 w \u03d5 z,w \u03b2 h T M d N d D K U W K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
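The generative story in steps 1-7 above can be exercised with a small simulation. This is a minimal sketch under assumed toy dimensions; the final hashtag step is simplified to translating one topical word through the topic-specific table, and it is an illustration of the process rather than the authors' implementation.

```python
# Minimal sketch of the generative story in steps 1-7 above, with assumed toy
# sizes; not the authors' implementation. The hashtag step is simplified to
# translating one randomly chosen topical word through varphi[z, w].
import numpy as np

rng = np.random.default_rng(0)
K, U, T, W, V, N_d = 5, 3, 4, 50, 20, 8          # topics, users, epochs, words, hashtags, words/post
delta, lam, alpha, beta_w, beta_h = 0.01, 0.01, 50.0 / K, 0.1, 0.1

pi = rng.beta(delta, delta)                       # step 1: P(word is topical)
eta = rng.beta(lam, lam)                          # step 1: P(topic comes from the global trend)
phi_B = rng.dirichlet([beta_w] * W)               # step 2: background word distribution
theta = rng.dirichlet([alpha] * K, size=T)        # step 3: per-epoch topic distributions
psi = rng.dirichlet([alpha] * K, size=U)          # step 4: per-user topic distributions
phi = rng.dirichlet([beta_w] * W, size=K)         # step 5: per-topic word distributions
varphi = rng.dirichlet([beta_h] * V, size=(K, W)) # step 6: topic-specific word -> hashtag table

def generate_microblog(u, t):
    x = rng.binomial(1, eta)                                   # 7a
    z = rng.choice(K, p=theta[t] if x == 1 else psi[u])        # 7b
    words, ys = [], []
    for _ in range(N_d):                                       # 7c
        y = rng.binomial(1, pi)
        words.append(rng.choice(W, p=phi[z] if y == 1 else phi_B))
        ys.append(y)
    topical = [w for w, y in zip(words, ys) if y == 1] or words
    hashtag = rng.choice(V, p=varphi[z, rng.choice(topical)])  # 7d (simplified)
    return z, words, hashtag

print(generate_microblog(u=0, t=2))
```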
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P r(x d = p, z d = k|z \u00acd , x \u00acd , y, w, h) \u221d N \u03b7 p + \u03bb N \u03b7 (.) + 2\u03bb \u2022 N l k + \u03b1 N l (.) + K\u03b1 \u2022 N d n=1 N k w dn + \u03b2 w N k (.) + W \u03b2 w \u2022 M d m=1 N d n=1 M w dn ,h dm \u00acd,k + \u03b2 h M w dn ,(.) \u00acd,k + V \u03b2 h ,",
"eq_num": "(1)"
}
],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "where l = u d when p = 0 and l = t d when p = 1. N \u03b7 0 is the number of microblog generated by personal interests, while N \u03b7 1 is the number of microblog coming from global topical trends,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "N \u03b7 (.) = N \u03b7 0 + N \u03b7 1 . N u d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "k is the number of microblogs generated by user u d and under topic k. N u d (.) is the total number of microblogs generated by user",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "u d . N t d k = t d t =1 e \u2212t \u03c1 N t\u2212t k , N t\u2212t k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "is the number of microblogs assigned to topic k at time epoch t \u2212 t , e \u2212t \u03c1 is decay factory, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "N t d (.) = K k=1 N t d k . N k w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "dn is the times of word w dn assigned to topic k, N k (.) is the times of all the word assigned to topic k, M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "w dn ,h dm \u00acd,k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "is the number of occurrences that word w dn is translated to hashtag h dm given topic k. All the counters mentioned above are calculated with the dth microblog excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
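As an illustration of how Eq. (1) is used, the following sketch computes the unnormalized scores for every (p, k) pair of one microblog and samples (x_d, z_d). The count arrays, word ids, and hashtag ids are random placeholders, so this is a shape-level sketch rather than the actual sampler.

```python
# Shape-level sketch of Eq. (1): score every (p, k) pair for one microblog and
# sample (x_d, z_d). All counts, word ids and hashtag ids are random
# placeholders; a real sampler maintains them incrementally with the d-th
# microblog's own counts removed.
import numpy as np

rng = np.random.default_rng(1)
K, W, V = 5, 50, 20
alpha, beta_w, beta_h, lam = 50.0 / K, 0.1, 0.1, 0.01

N_eta = rng.integers(1, 100, size=2)        # microblogs from personal (0) / global (1) source
N_user_k = rng.integers(0, 20, size=K)      # this user's microblogs per topic
N_time_k = rng.integers(0, 20, size=K)      # decayed epoch counts per topic
N_k_w = rng.integers(0, 30, size=(K, W))    # word-topic counts
M_kwh = rng.integers(0, 5, size=(K, W, V))  # word -> hashtag counts per topic

words, hashtags = [3, 17, 42], [7]          # content of the d-th microblog

def score(p, k):
    s = (N_eta[p] + lam) / (N_eta.sum() + 2 * lam)
    N_l = N_user_k if p == 0 else N_time_k
    s *= (N_l[k] + alpha) / (N_l.sum() + K * alpha)
    for w in words:
        s *= (N_k_w[k, w] + beta_w) / (N_k_w[k].sum() + W * beta_w)
    for h in hashtags:
        for w in words:
            s *= (M_kwh[k, w, h] + beta_h) / (M_kwh[k, w].sum() + V * beta_h)
    return s

scores = np.array([[score(p, k) for k in range(K)] for p in range(2)])
probs = scores / scores.sum()
p, k = np.unravel_index(rng.choice(probs.size, p=probs.ravel()), probs.shape)
print("sampled x_d =", int(p), "z_d =", int(k))
```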
{
"text": "We sample y dn for each word w dn in the dth microblog using the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P r(y dn = q|z, x, y \u00acdn , w, h) \u221d N \u03c0 q + \u03b4 N \u03c0 (.) + 2\u03b4 \u2022 N l w dn + \u03b2 w N l (.) + W \u03b2 w ,",
"eq_num": "(2)"
}
],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "where l = B when q = 0 and l = z d when q = 1. N \u03c0 0 is the number of words assigned to background words and N \u03c0 1 is the number of words under any topic respectively. N \u03c0 (.) = N \u03c0 0 + N \u03c0 1 , N B w dn is a count of word w dn occurs as a background word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "N z d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "w dn is the number of word w dn is assigned to topic z d , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "N z d (.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "is the total number of words assigned to topic z d . All counters are calculated with taking no account of the current word w dn .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
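Eq. (2) can be illustrated in the same spirit: given placeholder counts, compute the two unnormalized scores for y_dn and sample the indicator. The counts and the word id are assumptions for the sketch.

```python
# Sketch of Eq. (2): sample the background/topic indicator y_dn for one word
# occurrence from placeholder counts (which, as in the paper, should exclude
# the current occurrence).
import numpy as np

rng = np.random.default_rng(2)
W, delta, beta_w = 50, 0.01, 0.1

N_pi = np.array([120, 340])           # words assigned to background (0) / any topic (1)
N_B_w = rng.integers(0, 40, size=W)   # per-word background counts
N_z_w = rng.integers(0, 40, size=W)   # per-word counts under this microblog's topic z_d

w_dn = 17
scores = [
    (N_pi[q] + delta) / (N_pi.sum() + 2 * delta)
    * (counts[w_dn] + beta_w) / (counts.sum() + W * beta_w)
    for q, counts in enumerate((N_B_w, N_z_w))
]
y_dn = rng.binomial(1, scores[1] / sum(scores))
print("sampled y_dn =", y_dn)
```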
{
"text": "In many cases, hashtag dose not appear in the training data, to solve this problem, we assume that each word in the microblog can translate to a hashtag in the training data or itself. We assume that each word have aligned \u03c3 (we set \u03c3 = 1 in this paper after trying some number) times with itself under the specific topic. After all the hidden variables become stable, we can estimate the alignment probability as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "\u03d5 h,w,z = \uf8f1 \uf8f2 \uf8f3 N h z,w +\u03b2 h N (.) z,w +\u03c3+(V +1)\u03b2 h if h is a hashtag in the training data \u03c3+\u03b2 h N (.) z,w +\u03c3+(V +1)\u03b2 h if h is the word itself (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "where N h z,w is the number of the hashtag h co-occurs with the word w under topic z in the microblogs. For the probability alignment \u03d5 between hashtag and word, the potential size is W \u2022 V \u2022 K. The data sparsity poses a more serious problem in estimating \u03d5 than the topic-free word alignment case. To remedy the problem, we use interpolation smoothing technique for \u03d5. In this paper, we emplogy smoothing as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03d5 * h,w,z = \u03b3\u03d5 h,w,z + (1 \u2212 \u03b3)P (h|w),",
"eq_num": "(4)"
}
],
"section": "Learning",
"sec_num": "3.2"
},
{
"text": "where \u03d5 * h,w,z is the smoothed topical alignment probabilities, \u03d5 h,w,z is the original topical alignment probabilities. P (h|w) is topic-free word alignment probability. Here we obtain P (h|w) by exploring IBM model-1 (Brown et al., 1993) . \u03b3 is trade-off of two probabilities ranging from 0.0 to 1.0. When \u03b3 = 0.0, \u03d5 * h,w,z will be reduce to topic-free word alignment probability; and when \u03b3 = 1.0, there will be no smoothing in \u03d5 * h,w,z . For the word itself there are no smoothing, because it is a pseudo-count.",
"cite_spans": [
{
"start": 220,
"end": 240,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3.2"
},
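A minimal sketch of Eq. (3) and Eq. (4) for a single (topic, word) pair follows; the co-occurrence counts and the topic-free alignment P(h|w) are placeholders, whereas the paper obtains P(h|w) from IBM Model 1 via GIZA++.

```python
# Sketch of Eq. (3) and Eq. (4) for one (topic z, word w) pair: topical
# alignment probabilities with the self-translation pseudo-count sigma, then
# interpolation smoothing with a topic-free alignment P(h|w). Counts and
# P(h|w) are placeholders; the paper obtains P(h|w) with IBM Model 1 (GIZA++).
import numpy as np

rng = np.random.default_rng(3)
V, sigma, beta_h, gamma = 20, 1.0, 0.1, 0.6

N_h_zw = rng.integers(0, 10, size=V)        # hashtag/word co-occurrence counts under topic z
denom = N_h_zw.sum() + sigma + (V + 1) * beta_h

phi_train = (N_h_zw + beta_h) / denom       # Eq. (3), h is a hashtag seen in training
phi_self = (sigma + beta_h) / denom         # Eq. (3), h is the word w itself (no smoothing)

p_h_given_w = rng.dirichlet([1.0] * V)      # placeholder topic-free alignment P(h|w)
phi_smoothed = gamma * phi_train + (1 - gamma) * p_h_given_w   # Eq. (4)

print("phi(self) =", round(float(phi_self), 4))
print("smoothed phi, first 5 hashtags:", phi_smoothed[:5].round(4))
```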
{
"text": "We perform hashtag extraction as follows. Suppose given an unlabeled dataset, we perform Gibbs Sampling to iteratively estimate the topic and determine topic/background words for each microblog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "The process is the same as described in Section 3.2. After the hidden variables of topic/background words and the topic of each microblog become stable, we can estimate the distribution of topics for the dth microblog in unlabeled data by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "\u03c7 * dk = p(k)p(w d1 |k)...p(w dN d |k) Z where p(w dn |k) = N \u03c0 1 +\u03b4 N \u03c0 (.) +2\u03b4 \u2022 N k w dn +\u03b2 w N k (.) +W \u03b2 w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "and N k w dn is the number of words w dn that are assigned to topic k in the corpus, and p(k) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "N \u03b7 0 +\u03bb N \u03b7 (.) +2\u03bb \u2022 N u k +\u03b1 N u (.) +K\u03b1 + N \u03b7 1 +\u03bb N \u03b7 (.) +2\u03bb \u2022 N t k +\u03b1 N t (.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "+K\u03b1 is regarded as a prior for topic distribution, Z is the normalized factor. With topic distribution \u03c7 * and topical alignment table \u03d5 * , we can rank hashtags for the dth microblog in unlabeled data by computing the scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (h dm |w d , \u03c7 * d , \u03d5 * ) \u221d K z d =1 N d n=1 P (h dm |z d , w dn , \u03d5 * ) \u2022 P (z d |\u03c7 * d ) \u2022 P (w dn |w d ),",
"eq_num": "(5)"
}
],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
{
"text": "where h dm can be a hashtag in the training data or a word in the dth microblog, p(w dn |w d ) is the weight of the word w dn in the microblog, which can be estimated by the IDF score of the word. According to the ranking scores, we can suggest the top-ranked hashtags for each microblog to users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hashtag Extraction",
"sec_num": "3.3"
},
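The scoring in Eq. (5) can be sketched as below with placeholder chi*, phi*, and IDF weights; for brevity only training-set hashtags are ranked, while the full method also scores the microblog's own words through the self-translation entry of Eq. (3).

```python
# Sketch of the ranking in Eq. (5) with placeholder chi*, phi* and IDF weights.
# For brevity only training-set hashtags are scored; the full method also
# scores the microblog's own words via the self-translation entry of Eq. (3).
import numpy as np

rng = np.random.default_rng(4)
K, W, V = 5, 50, 20

chi_star = rng.dirichlet([1.0] * K)               # topic distribution of the d-th microblog
phi_star = rng.dirichlet([1.0] * V, size=(K, W))  # smoothed P(h | z, w)
idf = rng.random(W)                               # placeholder word weights

words = [3, 17, 42]
weights = np.array([idf[w] for w in words])
weights /= weights.sum()                          # P(w_dn | w_d)

scores = np.zeros(V)
for z in range(K):
    for w, pw in zip(words, weights):
        scores += phi_star[z, w] * chi_star[z] * pw

print("top-5 recommended hashtag ids:", np.argsort(scores)[::-1][:5])
```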
{
"text": "In this section, we introduce the experimental results and the data collection we constructed for training and evaluation. Firstly, we describe how do we construct the collection and statics of it. Then we introduce the experiment configurations and baseline methods. Finally, the evaluation results and analysis are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use a dataset collected from Sina Weibo to evaluate the proposed approach and alternative methods. We random select 166,864 microblogs from Aug. 2012 to June 2013. The unique number of hashtags in the corpus is 17,516. We use the microblogs posted from Aug. 2012 to May 2013 as the training data. The other microblogs are used for evaluation. The hashtags marked in the original microblogs are considered as the golden standards. We use precision (P ), recall (R), and F1-score (F 1 ) to evaluate the performance. Precision is calculated based on the percentage of \"hashtags truly assigned\" among \"hashtags assigned by system\". Recall is calculated based on the \"hashtags truly assigned\" among \"hashtags manually assigned\". F1-score is the harmonic mean of precision and recall. We do 500 iterations of Gibbs sampling to train the model. For optimize the hyperparmeters of the proposed method and alternative methods, we use 5-fold cross-validation in the training data to do it. The number of topics is set to 70. The other settings of hyperparameters are as follows: \u03b1 = 50/K, \u03b2 w = 0.1, \u03b2 h = 0.1, \u03bb = 0.01, and \u03b4 = 0.01. The smoothing factor \u03b3 in Eq. 3is set to 0.6. For estimating the translation probability without topical information, we use GIZA++ 1.07 to do it (Och and Ney, 2003) . For baselines, we compare the proposed model with the following alternative models.",
"cite_spans": [
{
"start": 1274,
"end": 1293,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
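A small sketch of the evaluation protocol described above; the macro-average over microblogs and the toy hashtag lists are assumptions for illustration, since the paper only specifies precision, recall, and their harmonic mean.

```python
# Sketch of the evaluation protocol: precision, recall and F1 of recommended
# hashtags against the author-assigned gold hashtags. The macro-average over
# microblogs and the toy lists below are assumptions for illustration.
def prf1(recommended, gold):
    rec, gld = set(recommended), set(gold)
    tp = len(rec & gld)
    p = tp / len(rec) if rec else 0.0
    r = tp / len(gld) if gld else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

system = [["#worldcup"], ["#finance"], ["#music"]]            # top-1 suggestions
gold = [["#worldcup"], ["#economy", "#finance"], ["#movie"]]  # author-assigned hashtags

per_post = [prf1(s, g) for s, g in zip(system, gold)]
P, R, F = (sum(col) / len(per_post) for col in zip(*per_post))
print(f"P={P:.3f} R={R:.3f} F1={F:.3f}")
```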
{
"text": "\u2022 TWTM: Topical word trigger model (TWTM) was proposed by Liu et al. for keyphrase extraction using only textual information (Liu et al., 2012) . We implemented the model and used it to achieve the task.",
"cite_spans": [
{
"start": 125,
"end": 143,
"text": "(Liu et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
{
"text": "\u2022 TTM: Ding et al. (2013) proposed the topical translation model (TTM) for hash tag extraction. We implemented and extended their method for evaluating it on the corpus constructed in this work. Table 2 shows the comparisons of the proposed method with the state-of-the-art methods on the constructed evaluation dataset. \"TUK-TTM\" denotes the method proposed in this paper. \"T-TTM\" and \"U-TTM\" represent the methods incorporating temporal and personal information respectively. \"K-TTM\" represents the method incorporating the extraction factor. From the results, we can observe that the proposed method is significantly better than other methods at 5% significance level (two-sided).",
"cite_spans": [
{
"start": 7,
"end": 25,
"text": "Ding et al. (2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "4.1"
},
{
"text": "Comparing to results of the TTM, we can observe that the temporal information, personal information and extraction strategy can all benefit the task. Among the three additional factors, the extraction strategy achieves the best result. The limitation of only being able to recommend hashtags that exist in the training data can be overcome in some degree by the proposed method. The relative improvement of proposed TUK-TTM over TTM is around 47.8% in F1-score. Table 3 shows the comparisons of the proposed method with the method \"K-TTM\" in two corpus NE-Corpus and E-Corpus. NE-Corpus include microblogs whose hashtags are not contained in the training data. E-Corpus include the microblogs whose hashtags appear in the training data. We can observe that the proposed method significantly better than \"K-TTM\" in the E-Corpus. Another observation is that the method incorporating the extraction factor achieves better performances on the NE-Corpus than E-Corpus. We think that the reason is that the fewer times hashtag appear, the greater weight it has. Hence, we can extract this kind of hashtags more easier. Figure 3 shows the precision-recall curves of TWTW, TTM, T-TTM, U-TTM, TU-TTM, K-TTM, and TUK-TTM on the evaluation dataset. Each point of a precision-recall curve represents extracting different number of hashtags ranging from 1 to 5 respectively. In the figure, curves which are close to the upper right-hand corner of the graph indicate the better performance. From the results, we can observe that the performance of TUK-TTM is in the upper right-hand corner. It also demonstrates that the proposed method achieves better performances than other methods. From the description of the proposed model, we can know that there are several hyperparameters in the proposed TUK-TTM. To evaluate the impacts of them, we evaluate two crucial ones, the number of topics K and the smoothing factor \u03b3. Table 4 shows the influence of the number of topics. From the table, we can observe that the proposed model obtains the best performance when K is set to 70. And performance decreases with more number of topics. We think that data sparsity may be one of the main reasons. With much more topic number, the data sparsity problem will be more serious when estimating topic-specific translation probability. Table 5 shows the influence of the translation probability smoothing parameter \u03b3. When \u03b3 is set to 0.0, it means that the topical information is omitted. Comparing the results of \u03b3 = 0.0 and other values, we can observe that the topical information can benefit this task. When \u03b3 is set to 1.0, it represents the method without smoothing. The results indicate that it is necessary to address the sparsity problem through smoothing.",
"cite_spans": [],
"ref_spans": [
{
"start": 462,
"end": 469,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1113,
"end": 1121,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1906,
"end": 1913,
"text": "Table 4",
"ref_id": null
},
{
"start": 2310,
"end": 2317,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "In this paper, we propose a novel method which incorporates temporal and personal factors into the topical translation model for hashtag recommendation task. Since existing translation model based methods for this task can only recommend hashtags that exist in the training data of the topical translation model, we also incorporate extraction strategies into the model. To evaluate the proposed method, we also construct a dataset from real world microblogging services. The results of experiments on the dataset demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://www.weibo.com. It is one of the most popular microblog services in China.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by 973 Program (2010CB327900), National Natural Science Foundation of China (61003092,61073069), Shanghai Leading Academic Discipline Project (B114) and \"Chen Guang\" project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation(11CG05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Query expansion for microblog retrieval",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Majumder",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of The Twentieth Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.Bandyopadhyay, M. Mitra, and P. Majumder. 2011. Query expansion for microblog retrieval. In Proceedings of The Twentieth Text REtrieval Conference, TREC 2011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Predicting the future with social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Asur",
"suffix": ""
},
{
"first": "B",
"middle": [
"A"
],
"last": "Huberman",
"suffix": ""
}
],
"year": 2010,
"venue": "WI-IAT'10",
"volume": "1",
"issue": "",
"pages": "492--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Asur and B.A. Huberman. 2010. Predicting the future with social media. In WI-IAT'10, volume 1, pages 492-499.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning similarity metrics for event identification in social media",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Mor",
"middle": [],
"last": "Naaman",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of WSDM '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Becker, Mor Naaman, and Luis Gravano. 2010. Learning similarity metrics for event identification in social media. In Proceedings of WSDM '10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classifying sentiment in microblogs: is brevity an advantage?",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Bermingham",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of CIKM'10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Bermingham and Alan F. Smeaton. 2010. Classifying sentiment in microblogs: is brevity an advantage? In Proceedings of CIKM'10.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Modeling annotated data",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "127--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.M. Blei and M.I. Jordan. 2003. Modeling annotated data. In Proceedings of SIGIR, pages 127-134.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Twitter mood predicts the stock market",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bollen",
"suffix": ""
},
{
"first": "Huina",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Computational Science",
"volume": "2",
"issue": "1",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1 -8.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter F",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent J",
"middle": [],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen A",
"middle": [],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert L",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263-311.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Enhanced sentiment learning using twitter hashtags and smileys",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using twitter hashtags and smileys. In Proceedings of COLING '10.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning topical translation model for microblog hashtag suggestion",
"authors": [
{
"first": "Zhuoye",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuoye Ding, Xipeng Qiu, Qi Zhang, and Xuanjing Huang. 2013. Learning topical translation model for microblog hashtag suggestion. In Proceedings of IJCAI 2013.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hashtag retrieval in a microblogging environment",
"authors": [
{
"first": "Miles",
"middle": [],
"last": "Efron",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of SIGIR '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miles Efron. 2010. Hashtag retrieval in a microblogging environment. In Proceedings of SIGIR '10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Personalized, interactive tag recommendation for flickr",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of RecSys '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg and Ingmar Weber. 2008. Personalized, interactive tag recommendation for flickr. In Proceedings of RecSys '08.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Finding scientific topics",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Social media recommendation based on people and tags",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Guy",
"suffix": ""
},
{
"first": "Naama",
"middle": [],
"last": "Zwerdling",
"suffix": ""
},
{
"first": "Inbal",
"middle": [],
"last": "Ronen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Carmel",
"suffix": ""
},
{
"first": "Erel",
"middle": [],
"last": "Uziel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of SIGIR '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Guy, Naama Zwerdling, Inbal Ronen, David Carmel, and Erel Uziel. 2010. Social media recommendation based on people and tags. In Proceedings of SIGIR '10.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mining expertise and interests from social media",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Guy",
"suffix": ""
},
{
"first": "Uri",
"middle": [],
"last": "Avraham",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Carmel",
"suffix": ""
},
{
"first": "Sigalit",
"middle": [],
"last": "Ur",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Inbal",
"middle": [],
"last": "Ronen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of WWW '13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Guy, Uri Avraham, David Carmel, Sigalit Ur, Michal Jacovi, and Inbal Ronen. 2013. Mining expertise and interests from social media. In Proceedings of WWW '13.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Social tag prediction",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Heymann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Hector",
"middle": [],
"last": "Garcia-Molina",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGIR '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Heymann, Daniel Ramage, and Hector Garcia-Molina. 2008. Social tag prediction. In Proceedings of SIGIR '08.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Target-dependent twitter sentiment classification",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL 2011",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of ACL 2011, Portland, Oregon, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Latent dirichlet allocation for tag recommendation",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Fankhauser",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Nejdl",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of RecSys '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Krestel, Peter Fankhauser, and Wolfgang Nejdl. 2009. Latent dirichlet allocation for tag recommendation. In Proceedings of RecSys '09.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A contextual-bandit approach to personalized news article recommendation",
"authors": [
{
"first": "Lihong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "661--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lihong Li, Wei Chu, John Langford, and Robert E Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661-670. ACM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Personalized content recommendation and user satisfaction: Theoretical synthesis and empirical findings",
"authors": [
{
"first": "Ting-Peng",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hung-Jen",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yi-Cheng",
"middle": [],
"last": "Ku",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Management Information Systems",
"volume": "23",
"issue": "3",
"pages": "45--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting-Peng Liang, Hung-Jen Lai, and Yi-Cheng Ku. 2007. Personalized content recommendation and user satisfaction: Theoretical synthesis and empirical findings. Journal of Management Information Systems, 23(3):45-70.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Connecting users and items with weighted tags for personalized item recommendations",
"authors": [
{
"first": "Huizhi",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yuefeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Richi",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Xiaohui",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 21st ACM conference on Hypertext and hypermedia",
"volume": "",
"issue": "",
"pages": "51--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huizhi Liang, Yue Xu, Yuefeng Li, Richi Nayak, and Xiaohui Tao. 2010. Connecting users and items with weighted tags for personalized item recommendations. In Proceedings of the 21st ACM conference on Hypertext and hypermedia, pages 51-60. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Topical word trigger model for keyphrase extraction",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Liu, Chen Liang, and Maosong Sun. 2012. Topical word trigger model for keyphrase extraction. In Proceedings of COLING.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A content-based method to enhance tag recommendation",
"authors": [
{
"first": "Yu-Ta",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Shoou-I",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jane Yung-jen",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of IJCAI'09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Ta Lu, Shoou-I Yu, Tsung-Chieh Chang, and Jane Yung-jen Hsu. 2009. A content-based method to enhance tag recommendation. In Proceedings of IJCAI'09.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Browsing system for weblog articles based on automated folksonomy. Workshop on the Weblogging Ecosystem Aggregation Analysis and Dynamics at WWW",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Ohkura",
"suffix": ""
},
{
"first": "Yoji",
"middle": [],
"last": "Kiyota",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsutomu Ohkura, Yoji Kiyota, and Hiroshi Nakagawa. 2006. Browsing system for weblog articles based on automated folksonomy. Workshop on the Weblogging Ecosystem Aggregation Analysis and Dynamics at WWW.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluation of the reputation network using realistic distance between facebook data",
"authors": [
{
"first": "Takanobu",
"middle": [],
"last": "Otsuka",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "Takayuki",
"middle": [],
"last": "Ito",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WI-IAT '12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takanobu Otsuka, Takuya Yoshimura, and Takayuki Ito. 2012. Evaluation of the reputation network using realistic distance between facebook data. In Proceedings of WI-IAT '12.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Opinion mining and sentiment analysis",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "Found. Trends Inf. Retr",
"volume": "2",
"issue": "1-2",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Found. Trends Inf. Retr., 2(1-2):1-135, January.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pairwise interaction tensor factorization for personalized tag recommendation",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Rendle",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Schmidt-Thieme",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the third ACM international conference on Web search and data mining",
"volume": "",
"issue": "",
"pages": "81--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Rendle and Lars Schmidt-Thieme. 2010. Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the third ACM international conference on Web search and data mining, pages 81-90. ACM.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning optimal ranking with tensor factorization for tag recommendation",
"authors": [
{
"first": "Steffen",
"middle": [],
"last": "Rendle",
"suffix": ""
},
{
"first": "Leandro",
"middle": [
"Balby"
],
"last": "Marinho",
"suffix": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Nanopoulos",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Schmidt-Thieme",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of KDD '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Rendle, Leandro Balby Marinho, Alexandros Nanopoulos, and Lars Schmidt-Thieme. 2009. Learning optimal ranking with tensor factorization for tag recommendation. In Proceedings of KDD '09.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Earthquake shakes twitter users: real-time event detection by social sensors",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Sakaki",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of WWW '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of WWW '10.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Personalized recommendation in social tagging systems using hierarchical clustering",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Shepitsen",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Gemmell",
"suffix": ""
},
{
"first": "Bamshad",
"middle": [],
"last": "Mobasher",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Burke",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys '08",
"volume": "",
"issue": "",
"pages": "259--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Shepitsen, Jonathan Gemmell, Bamshad Mobasher, and Robin Burke. 2008. Personalized recommendation in social tagging systems using hierarchical clustering. In Proceedings of the 2008 ACM Conference on Recommender Systems, RecSys '08, pages 259-266, New York, NY, USA. ACM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Realtime automatic tag recommendation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Ziming",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Huajing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qiankun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wang-Chien",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C. Lee",
"middle": [],
"last": "Giles",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of SIGIR '08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Song, Ziming Zhuang, Huajing Li, Qiankun Zhao, Jia Li, Wang-Chien Lee, and C. Lee Giles. 2008. Real- time automatic tag recommendation. In Proceedings of SIGIR '08.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Exploiting topical perceptions over multi-lingual text for hashtag suggestion on twitter",
"authors": [
{
"first": "Amara",
"middle": [],
"last": "Tariq",
"suffix": ""
},
{
"first": "Asim",
"middle": [],
"last": "Karim",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Foroosh",
"suffix": ""
}
],
"year": 2013,
"venue": "FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amara Tariq, Asim Karim, Fernando Gomez, and Hassan Foroosh. 2013. Exploiting topical perceptions over multi-lingual text for hashtag suggestion on twitter. In FLAIRS Conference.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach",
"authors": [
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CIKM '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaolong Wang, Furu Wei, Xiaohua Liu, Ming Zhou, and Ming Zhang. 2011. Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach. In Proceedings of CIKM '11.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Topical keyphrase extraction from twitter",
"authors": [
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Palakorn",
"middle": [],
"last": "Achananuparp",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "379--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Xin Zhao, Jing Jiang, Jing He, Yang Song, Palakorn Achananuparp, Ee-Peng Lim, and Xiaoming Li. 2011. Topical keyphrase extraction from twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 379-388. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"text": "1: The notations used in this work. D The number of training data set W The number of unique word in the corpus V The number of unique hashtag in the corpus K The number of topics T The total number of time epoches U The total number of users N d The number of words in the dth microblog M d The number of hashtags in the dth microblog z d The topic of the dth microblog x d The latent variable decided the distribution category of z d y dn The latent variable decided the distribution category of w dn \u03c0 The distribution of latent variable y dn \u03b7 The distribution of latent variable x d \u03c6 z The distribution of topic words \u03c6 B The distribution of background words \u03b8 t The distribution of topics for time epoch t \u03c8 u The distribution of topics for user u t d The time epoch for microblog d u d The user of the microblog d \u03d5 The topic-specific word alignment table between word and hashtag or itself x d and z d , the conditional probability of x d = p,z d = k is calculated as follows:",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Precision-recall curves of different methods on this task.",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Evaluation results of different methods on the evaluation collection.",
"num": null,
"content": "<table><tr><td>Methods</td><td>Precision</td><td>Recall</td><td>F 1</td></tr><tr><td>TWTM</td><td>0.231</td><td>0.202</td><td>0.215</td></tr><tr><td>SVM</td><td>0.418</td><td>0.366</td><td>0.390</td></tr><tr><td>TTM</td><td>0.319</td><td>0.279</td><td>0.297</td></tr><tr><td>T-TTM</td><td>0.338</td><td>0.301</td><td>0.319</td></tr><tr><td>U-TTM</td><td>0.341</td><td>0.307</td><td>0.323</td></tr><tr><td>K-TTM</td><td>0.386</td><td>0.337</td><td>0.360</td></tr><tr><td>TU-TTM</td><td>0.355</td><td>0.310</td><td>0.331</td></tr><tr><td>TUK-TTM</td><td>0.452</td><td>0.415</td><td>0.433</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Evaluation results of two different corpus.",
"num": null,
"content": "<table><tr><td/><td/><td>Corpus</td><td/><td>Methods</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td/><td colspan=\"2\">NE-Corpus</td><td colspan=\"2\">K-TTM TUK-TTM 0.641 0.561 0.598 0.631 0.553 0.589</td></tr><tr><td/><td/><td colspan=\"2\">E-Corpus</td><td colspan=\"2\">K-TTM TUK-TTM 0.288 0.271 0.279 0.172 0.162 0.167</td></tr><tr><td colspan=\"5\">Table 4: The influence of the number of topics</td></tr><tr><td colspan=\"2\">K of TUK-TTM.</td><td/><td/><td/></tr><tr><td colspan=\"2\">K Precision</td><td>Recall</td><td>F 1</td><td/></tr><tr><td>10</td><td>0.410</td><td>0.382</td><td colspan=\"2\">0.396</td></tr><tr><td>30</td><td>0.435</td><td>0.380</td><td colspan=\"2\">0.406</td></tr><tr><td>50</td><td>0.448</td><td>0.413</td><td colspan=\"2\">0.430</td></tr><tr><td>70</td><td>0.452</td><td>0.415</td><td colspan=\"2\">0.433</td></tr><tr><td>100</td><td>0.439</td><td>0.404</td><td colspan=\"2\">0.421</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "The influence of the smoothing parameter \u03b3 of TUK-TTM.",
"num": null,
"content": "<table><tr><td colspan=\"2\">\u03b3 Precision</td><td>Recall</td><td>F 1</td></tr><tr><td>0.0</td><td>0.379</td><td>0.354</td><td>0.366</td></tr><tr><td>0.2</td><td>0.405</td><td>0.372</td><td>0.388</td></tr><tr><td>0.4</td><td>0.433</td><td>0.398</td><td>0.415</td></tr><tr><td>0.6</td><td>0.452</td><td>0.415</td><td>0.433</td></tr><tr><td>0.8</td><td>0.426</td><td>0.386</td><td>0.405</td></tr><tr><td>1.0</td><td>0.423</td><td>0.381</td><td>0.401</td></tr></table>",
"html": null
}
}
}
}
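
The F1 column in the evaluation tables above (TABREF2, and the Table 4 block inside TABREF3) is presumably the standard harmonic mean of precision and recall, F1 = 2PR/(P+R). The following minimal Python sketch, with the precision/recall/F1 triples transcribed by hand from the TABREF2 content, recomputes that column as a consistency check; it is illustrative only and not part of the parsed paper, and deviations of about 0.001 are expected because the published P and R values are themselves rounded to three decimals.

# Minimal consistency check for the F1 column of TABREF2 (values transcribed by hand).
# Assumes F1 is the harmonic mean of precision (P) and recall (R): F1 = 2PR / (P + R).
reported = {
    "TWTM":    (0.231, 0.202, 0.215),
    "SVM":     (0.418, 0.366, 0.390),
    "TTM":     (0.319, 0.279, 0.297),
    "T-TTM":   (0.338, 0.301, 0.319),
    "U-TTM":   (0.341, 0.307, 0.323),
    "K-TTM":   (0.386, 0.337, 0.360),
    "TU-TTM":  (0.355, 0.310, 0.331),
    "TUK-TTM": (0.452, 0.415, 0.433),
}

for method, (p, r, f1_reported) in reported.items():
    f1 = 2 * p * r / (p + r)
    # Compare at three decimals; rounding of P and R can shift the last digit slightly.
    print(f"{method:8s}  recomputed F1 = {f1:.3f}  reported = {f1_reported:.3f}")

For TUK-TTM, for example, 2 x 0.452 x 0.415 / (0.452 + 0.415) is approximately 0.433, matching the best reported score in TABREF2.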