|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:53:30.435970Z" |
|
}, |
|
"title": "Ranking Online Reviews Based on Their Helpfulness: An Unsupervised Approach", |
|
"authors": [ |
|
{ |
|
"first": "Alimuddin", |
|
"middle": [], |
|
"last": "Melleng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Queen's University", |
|
"location": { |
|
"settlement": "Belfast", |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anna-Jurek", |
|
"middle": [], |
|
"last": "Loughrey", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Queen's University", |
|
"location": { |
|
"settlement": "Belfast", |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Online reviews are an essential aspect of online shopping for both customers and retailers. However, many reviews found on the Internet lack in quality, informativeness or helpfulness. In many cases, they lead the customers towards positive or negative opinions without providing any concrete details (e.g., very poor product, I would not recommend it). In this work, we propose a novel unsupervised method for quantifying helpfulness leveraging the availability of a corpus of reviews. In particular, our method exploits three characteristics of the reviews, viz., relevance, emotional intensity and specificity, towards quantifying helpfulness. We perform three rankings (one for each feature above), which are then combined to obtain a final helpfulness ranking. For the purpose of empirically evaluating our method, we use review of four product categories from Amazon review 1. The experimental evaluation demonstrates the effectiveness of our method in comparison to a recent and state-of-the-art baseline.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Online reviews are an essential aspect of online shopping for both customers and retailers. However, many reviews found on the Internet lack in quality, informativeness or helpfulness. In many cases, they lead the customers towards positive or negative opinions without providing any concrete details (e.g., very poor product, I would not recommend it). In this work, we propose a novel unsupervised method for quantifying helpfulness leveraging the availability of a corpus of reviews. In particular, our method exploits three characteristics of the reviews, viz., relevance, emotional intensity and specificity, towards quantifying helpfulness. We perform three rankings (one for each feature above), which are then combined to obtain a final helpfulness ranking. For the purpose of empirically evaluating our method, we use review of four product categories from Amazon review 1. The experimental evaluation demonstrates the effectiveness of our method in comparison to a recent and state-of-the-art baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Reviews are an essential aspect of information that allows users to obtain insight into a product of interest before purchasing. Typically, users write their reviews in order to express their satisfaction or dissatisfaction about purchased items or services. Products and sellers with more positive reviews tend to gain more new customers than products or sellers without reviews or with many negative reviews. This is because customers feel more confident buying products that have been recommended by other buyers. Popular products could have hundreds or thousands of reviews, which makes it impossible for the customers to read all of them. Moreover, it is not easy for a user to prioritize reading the most informative reviews since there are 1 http://jmcauley.ucsd.edu/data/amazon/ often no such ranking options. Some websites rank reviews based on the posting date or rating star, for example, Trustpilot.com and Reviews.io. Amazon uses a crowdsourcing mechanism, a voting system, to gather feedback on review helpfulness, and then rank them based on the overall votes they received (Amazon.com) . A user can vote for a review as being helpful or unhelpful. Amazon was estimated to receive a revenue of about $2.7 billion by providing simple question \"was this review helpful to you?\" (Spool, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1089, |
|
"end": 1101, |
|
"text": "(Amazon.com)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1291, |
|
"end": 1304, |
|
"text": "(Spool, 2009)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although such a voting system is helpful for customers, it has several limitations due to the inherent character of the voting process. There are number of reasons: 1) not all reviews get the helpfulness vote; 2) the helpfulness voting does not work for cold star review (i.e., a new user or a new review will have much less votes) (Singh et al., 2017) ; 3) reviews receiving helpfulness votes would tend to gather more vote due to the snowball effect (e.g., phenomena such as social proof (Cialdini, 1987) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 352, |
|
"text": "(Singh et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 506, |
|
"text": "(Cialdini, 1987)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we hypothesise that helpfulness of a review should be assessed based on three characteristics, namely relevance (whether the review discusses the key features relevant to a specific product), emotional intensity (level of emotions expressed within a review) and specificity (level of details discussed in a review). We motivate the importance of each of those features later in the paper. We then propose an unsupervised helpfulness ranking method that does not depend on the helpfulness votes and only takes under consideration the content of the review and the star rating. We demonstrate that our proposed method outperforms the state-of-the-art review ranking techniques, through an extensive empirical evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organised as follows. In the next section we present an overview of the work that has been carried out in this space. Following this, we provide the motivation and technical details of the proposed method. Finally, the results of the experimental evaluation are demonstrated followed by the discussion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several approaches to automatically determining the helpfulness of online reviews have been explored in the past. In majority of the existing work, supervised machine learning models have been employed considering the problem as a predictive task (i.e. predict whether/how useful a review is) (Martin and Pu, 2014; Krishnamoorthy, 2015; Malik and Hussain, 2017; Singh et al., 2017; Wu et al., 2017; Enamul Haque et al., 2018; Alsmadi et al., 2020) . With supervised approaches, various types of features such as linguistic features (Krishnamoorthy, 2015; Malik and Hussain, 2017; Wu et al., 2017) or textual features (i.e. polarity, subjectivity, entropy and readability) (Singh et al., 2017; Siering et al., 2018) are first extracted from the reviews, with machine learning methods used over such data to train a predictive model. In a few papers, unsupervised learning based approaches have been used to rank reviews based on their helpfulness or relevance (Tsur and Rappoport, 2006; Wu et al., 2011; Woloszyn et al., 2017) . It is very apparent that the majority of work has been focused on using supervised machine learning and unsupervised learning has not been well explored in this space. Supervised learning methods depend on large, annotated datasets to train the model. Unfortunately, most of the publicly available online reviews datasets do not have labels related to their helpfulness. This makes the unsupervised learning based approaches much more attractive and hence it is the focus of our work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 314, |
|
"text": "(Martin and Pu, 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 336, |
|
"text": "Krishnamoorthy, 2015;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 361, |
|
"text": "Malik and Hussain, 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 381, |
|
"text": "Singh et al., 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 398, |
|
"text": "Wu et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 425, |
|
"text": "Enamul Haque et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 447, |
|
"text": "Alsmadi et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 554, |
|
"text": "(Krishnamoorthy, 2015;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 579, |
|
"text": "Malik and Hussain, 2017;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 596, |
|
"text": "Wu et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 692, |
|
"text": "(Singh et al., 2017;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 693, |
|
"end": 714, |
|
"text": "Siering et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 959, |
|
"end": 985, |
|
"text": "(Tsur and Rappoport, 2006;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 986, |
|
"end": 1002, |
|
"text": "Wu et al., 2011;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1025, |
|
"text": "Woloszyn et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A review ranking method based on unsupervised learning was proposed by Tsur and Rappoport (2006) . The authors first created a corpus of core dominant terms for the reviews representing the key aspects relevant to a specific product. Dominant terms were obtained by computing the frequency of all terms in a reviews collection and re-ranking them by their frequency in the reference to the British National Corpus, a baseline corpus. They named the corpus as virtual core (VC) review and represented it as feature vectors. Following this, they ranked the reviews according to their distance from the virtual core review vector. They assumed that the smaller the distance between a review and the virtual core review, the more relevant/helpful the review is. Wu et al. (2011) proposed a ranking method to detect low quality reviews by using link analysis techniques. Three ranking algorithms have been implemented in their study which are (1) PageRank algorithm (Page et al., 1999) , (2) HITS algorithm (Kleinberg et al., 2011) , and (3) Length algorithm. First, they construct a graph for each review of a product where the vertexes are sentences in a review. Two directional edges between two vertexes are induced if they are similar according to specific POS tag i.e., nouns, adjectives, and verb. They compute the centrality scores of sentences using the PageRank and the HITS algorithms. A score for each review was obtained by summing all the centrality scores of all the sentences in a review and then rank the review based on the high centrality scores. The Length algorithm was used to rank all reviews based on total number of words. They count the number of words for each review and rank it based on the high score. The authors conjecture that high-quality review should contain more words than poor reviews. Two baseline methods were used for comparison in their experimental evaluation. 
From the evaluation, it could however be observed that their results were only slightly inferior in comparison to the baselines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 96, |
|
"text": "Tsur and Rappoport (2006)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 774, |
|
"text": "Wu et al. (2011)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 980, |
|
"text": "(Page et al., 1999)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1002, |
|
"end": 1026, |
|
"text": "(Kleinberg et al., 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
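The virtual-core idea described above can be sketched in a few lines. This is a simplified, hypothetical illustration: `virtual_core_rank`, the whitespace tokenizer and the `top_k` cutoff are our own constructions, and the re-ranking of dominant terms against the British National Corpus is omitted.

```python
from collections import Counter
import math

def virtual_core_rank(reviews, top_k=5):
    """Rank reviews by cosine similarity to a 'virtual core' vector built
    from the most frequent terms across the whole review collection."""
    # Dominant terms: most frequent tokens over the collection (the original
    # method additionally re-ranks them against a baseline corpus).
    all_tokens = [t for r in reviews for t in r.lower().split()]
    dominant = [t for t, _ in Counter(all_tokens).most_common(top_k)]

    def to_vec(text):
        counts = Counter(text.lower().split())
        return [counts[t] for t in dominant]

    core = to_vec(" ".join(reviews))  # the virtual core review vector

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Higher similarity (smaller distance) to the core = more relevant
    return sorted(range(len(reviews)),
                  key=lambda i: -cosine(to_vec(reviews[i]), core))
```

Here the review closest to the aggregate term profile of the collection is ranked first, mirroring the VC-review intuition.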
|
{ |
|
"text": "Inspired by the work proposed in (Martin and Pu, 2014) , Woloszyn et al. (2017) developed MRR (Most Relevant Review), a novel unsupervised algorithm to rank reviews based on their estimated relevance. MRR algorithm consists of three steps: (1) First, they construct a graph of reviews for each product where the nodes are the reviews and the edges are defined based on the similarity between pairs of reviews. Two similarity scores are considered: cosine similarity between TF-IDF vectors computed for each review, and similarity between rating scores of reviews (i.e., rating scores from 1 to 5 given by reviewers), (2) This is followed by graph pruning that works by removing all edges with the similarity scores lower that the minimum threshold value, (manually set as \u03b2=0.85533), (3) Finally, the centrality scores are calculated for each review using PageRank algorithm. The authors hypothesise that the more central reviews should be considered as most relevant. Two state-of-the-art unsupervised learning (Tsur and Rappoport, 2006; Wu et al., 2011) and two supervised learning methods (i.e., one of the method use the same features as (Wu et al., 2011) ) were adopted in the experimental evaluation for comparison. Although, their results were lower than those obtained by supervised learning methods, they outperformed the two unsupervised learning based approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 54, |
|
"text": "(Martin and Pu, 2014)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1012, |
|
"end": 1038, |
|
"text": "(Tsur and Rappoport, 2006;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1039, |
|
"end": 1055, |
|
"text": "Wu et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1142, |
|
"end": 1159, |
|
"text": "(Wu et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
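The pruning-plus-PageRank step of MRR can be sketched as below. This is a minimal sketch under our own assumptions: `sim` is a precomputed pairwise similarity matrix standing in for the combined TF-IDF/rating similarity, and PageRank is implemented as a short pure-Python power iteration (isolated reviews simply keep the teleportation mass) rather than a library call.

```python
def mrr_scores(sim, beta=0.85533, damping=0.85, iters=50):
    """PageRank centrality over a review-similarity graph, keeping only
    edges whose similarity reaches the pruning threshold beta."""
    n = len(sim)
    # Graph pruning: drop self-loops and edges below the threshold
    adj = [[1.0 if i != j and sim[i][j] >= beta else 0.0 for j in range(n)]
           for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            # mass flowing into i from each neighbour j, split by j's degree
            rank = sum(scores[j] * adj[j][i] / max(sum(adj[j]), 1.0)
                       for j in range(n))
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return scores  # higher score = more central = estimated more relevant
```

With this sketch, a pair of mutually similar reviews ends up more central than a review whose similarities all fall below the threshold.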
|
{ |
|
"text": "In this work, we propose a new unsupervised method for ranking online reviews based on their helpfulness. Apart from the relevance (as in case of the existing unsupervised techniques), our method also considers the emotional intensity and the specificity of the reviews while assessing their helpfulness; this makes it unlike any of the approaches discussed above. For the text representation, we apply the Roberta state-of-the-art language model as opposed to TF-IDF used by the existing unsupervised methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The key novelty of the proposed method is that it incorporates three different characteristics of online reviews while ranking them according to their helpfulness. We hypothesise that the helpfulness of a review should be determined based on the following features: a) Relevance. Relevance indicates how well a review matches with customer's specific information needs (Liu et al., 2019) . In other words, a helpful review should discuss the key features of a product, which are important for the future buyers (e.g., \"The camera is easy to use, it is compact and perfect for travelling.\"). Review's relevance has been modelled by the existing work (Wu et al., 2011; Woloszyn et al., 2017) using graph composed of all reviews, their similarities and various centrality measures. It was assumed that the reviews that are the most central within the graph contain the most relevant information about the product. In our work, we take a similar approach, however, instead of graphs we used a simpler pair similarity based method. b) Emotional Intensity. We hypothesise that emotions play an important part in a review process as they allow customers to express their feelings and experiences through opinions. Therefore, a good review should contain a good balance of both, facts and emotions. The relationship between helpfulness of online reviews and emotions have been explored by Malik and Hussain (2017) where they stud-ied which emotions are important for helpfulness prediction. Martin and Pu (2014) used emotions to detect helpful reviews by applying different classification models (i.e., SVM, Random Forest, and Na\u00efve Bayes) and demonstrated that their approach outperformed methods using POS tagging features. Emotion information has not been considered by any of the existing unsupervised methods. In this work, we propose to consider the level of emotions contained within a review as one of the factors in determining their helpfulness. c) Specificity. 
A review of a product will be considered as useful/informative if it discusses various features of the products. In other words, instead of just expressing satisfaction/dissatisfaction from a product (e.g. \"I hate this camera and would not recommend it\"), it is much more helpful if the review explains what good or bad there is about the product (e.g. \"The battery life is too short and the zoom is rather poor.\"). The greater number of different features is mentioned in a review, the more informative the review is for any potential buyer/customer. It should be noted that there is a distinct difference between the relevance and the specificity. With relevance, we assess whether the key characteristic of a product was discussed. While with specificity, we evaluate the level of details that was provided while discussing different features of a product. Following this reasoning, we propose to consider the number of different entities mentioned in the reviews while ranking the reviews based on their helpfulness. Such a specificity feature has also not been considered by any of the existing work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 387, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 666, |
|
"text": "(Wu et al., 2011;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 667, |
|
"end": 689, |
|
"text": "Woloszyn et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1381, |
|
"end": 1405, |
|
"text": "Malik and Hussain (2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1503, |
|
"text": "Martin and Pu (2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Apart from the aforementioned characteristics, we also consider the star rating of the reviews in our ranking process. It has been demonstrated in the literature that the application of star rating is beneficial when evaluating the helpfulness of a review (Tsur and Rappoport, 2006; Schuff and Mudambi, 2010; Singh et al., 2017) . The pseudocode of our proposed methods is presented in Algorithm 1. The input to the method is a collection of reviews related to the same products. Each review contains the review text (r) and the star rating associated with this review (s). In the first step of the algorithm, the input reviews are ranked separately on the basis of their relevance, emotional intensity and specificity. For the relevance ranking, we create a product-specific \"summary document\" (sum), which contains all individual reviews collated together. The summary document and each individual review are then converted into vectors using the RoBERTa pre-trained language model (Liu et al., 2019) . For this part, any other embedding model (such as Word2Vec or Glove) can be considered. We used the RoBERTa model as it has recently received state-of-the-art results on many NLP benchmark datasets (Liu et al., 2019) . Following this, the cosine similarity between each individual review and the summary document is calculated as its relevance score. It is worth noting that the proposed relevance ranking method is much simpler and faster than those of the baseline, which uses graphs to model similarity between reviews. With the second ranking, the reviews are ranked based on their emotional intensity. To identify different emotions in the reviews we used the DepecheMood++ (Araque et al., 2018) lexicon that contains 187942 words with 8 emotions intensity value for each word; this could be replaced with any emotion lexicon. For each review, we first identify all words which are present in the lexicon. 
Following this, all the intensity values assigned to those words in the lexicon are added together. The final emotion score assigned to each review is the accumulation of intensity value by summing all emotion words within this review.", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 282, |
|
"text": "(Tsur and Rappoport, 2006;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 308, |
|
"text": "Schuff and Mudambi, 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 328, |
|
"text": "Singh et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 984, |
|
"end": 1002, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1203, |
|
"end": 1221, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1684, |
|
"end": 1705, |
|
"text": "(Araque et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
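The relevance and emotion scoring steps above can be sketched as follows. The hashed bag-of-words `embed` is only a runnable stand-in for the RoBERTa sentence embedding, and the single-number lexicon stands in for DepecheMood++'s eight per-word intensities; all function names here are our own.

```python
import math
from collections import Counter

def embed(text, dim=16):
    """Stand-in for a sentence embedding (the paper uses RoBERTa):
    a tiny hashed bag-of-words vector, just to make the sketch runnable."""
    v = [0.0] * dim
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def relevance_scores(reviews):
    # Summary document = all reviews of the product collated together
    summary_vec = embed(" ".join(reviews))
    return [cosine(embed(r), summary_vec) for r in reviews]

def emotion_score(review, lexicon):
    # Sum the per-word emotion intensities for words found in the lexicon
    # (DepecheMood++ stores eight intensities per word; one number here)
    return sum(lexicon.get(tok, 0.0) for tok in review.lower().split())
```

Swapping `embed` for a real RoBERTa encoder and `lexicon` for the DepecheMood++ table recovers the two rankings described in the text.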
|
{ |
|
"text": "Finally, for the specificity ranking, we first apply name entity recognition and extract entities from the reviews using the NLTK library 2 . We calculate the specif icity score for each review as the sum of all entities that it contains. All the reviews are then sorted separately based on the three scores. As the outputs of the aforementioned steps, we obtained three rankings of the reviews, which were constructed based on the relevance, emotional intensity, and specificity of the reviews (lines 15-17).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
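A rough sketch of the specificity score. The real pipeline would count the entity chunks produced by NLTK's `nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))`; the capitalization heuristic below is only a self-contained stand-in so the sketch runs without NLTK models, and it is our own simplification.

```python
import re

def specificity_score(review):
    """Count candidate named entities: maximal runs of capitalized words
    that do not start a sentence (a crude stand-in for NLTK's ne_chunk)."""
    tokens = re.findall(r"[A-Za-z']+|[.!?]", review)
    score, i = 0, 0
    sentence_start = True
    while i < len(tokens):
        tok = tokens[i]
        if tok in ".!?":
            sentence_start = True
            i += 1
            continue
        if tok[0].isupper() and not sentence_start:
            # absorb the whole capitalized run as a single entity
            while i < len(tokens) and tokens[i][0].isupper():
                i += 1
            score += 1
        else:
            sentence_start = False
            i += 1
    return score
```

A review mentioning several concrete entities ("Canon EOS", "Best Buy") thus scores higher than a purely evaluative one, which is the intended behaviour of the specificity ranking.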
|
{ |
|
"text": "As mentioned earlier, we also consider the star rating in our ranking method as it is considered as an good indicator of reviews helpfulness (Tsur and Rappoport, 2006; Schuff and Mudambi, 2010; Singh et al., 2017) . We process the star rating by calculating the absolute deviation. The use of star rating deviation as a feature has been demonstrated in (Jindal and Liu, 2008; Lim et al., 2010; Jiang et al., 2013; Xu, 2013; Savage et al., 2015; Saumya and Singh, 2018) and some of the authors apply absolute deviation for the star rating (Danescu-Niculescu-Mizil et al., 2009; Mukherjee et al., 2013a,b; Runa et al., 2017) . First, we calculate the average of all star ratings of a product review (line 20). In the next step, for each review r i , we calculate its absolute deviation (AD) from the average star rating as per Eq 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 167, |
|
"text": "(Tsur and Rappoport, 2006;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 193, |
|
"text": "Schuff and Mudambi, 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 213, |
|
"text": "Singh et al., 2017)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 375, |
|
"text": "(Jindal and Liu, 2008;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 393, |
|
"text": "Lim et al., 2010;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 413, |
|
"text": "Jiang et al., 2013;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 423, |
|
"text": "Xu, 2013;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 444, |
|
"text": "Savage et al., 2015;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 468, |
|
"text": "Saumya and Singh, 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 576, |
|
"text": "(Danescu-Niculescu-Mizil et al., 2009;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 603, |
|
"text": "Mukherjee et al., 2013a,b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 622, |
|
"text": "Runa et al., 2017)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "AD i = |s i \u2212 avg| RAD i = (1 \u2212 \u03b1) * AD i (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where s i is star rating for review r i , typically between 1 and 5. Finally we calculate the rating absolute deviation (RAD) (line 22) as per equation 1, where \u03b1 is used to balance the impact of the star rating on the final ranking and its value has been adopted from (Woloszyn et al., 2017), \u03b1 = 0.867168.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
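Eq. 1 is straightforward to compute. A minimal sketch (the function name is ours; the default alpha is the value quoted above from Woloszyn et al.):

```python
def rating_absolute_deviation(stars, alpha=0.867168):
    """Absolute deviation of each star rating from the product's average,
    scaled by (1 - alpha) as in Eq. 1."""
    avg = sum(stars) / len(stars)
    return [(1 - alpha) * abs(s - avg) for s in stars]
```

Reviews whose rating deviates strongly from the product average receive a larger RAD penalty, which later pushes them down the combined ranking.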
|
{ |
|
"text": "The RAD value will be further included in the final ranking process together with the other three rankings as explained below. For combining the three rankings (i.e., relevance, emotional intensity and specificity), we applied the z-score minimization method (Standard score, 2021). First, the mean (\u00b5) and the standard deviation (\u03c3) of the three ranking positions are computed for each review r i \u2208 R. In the next step we calculate the z-score distance matrix calculating the z-score for each review and every possible ranking position according to the following formula (lines 21-26):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "z-score = |(p \u2212 \u00b5)/\u03c3| (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
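The z-score distance matrix of Eq. 2 might be built as follows. This is a sketch under our own conventions: 0-based positions, the population standard deviation, and a small epsilon guarding the case where all three rankings agree (sigma = 0), none of which are specified by the paper.

```python
import statistics

def zscore_matrix(rankings):
    """rankings[k][i] = position of review i in ranking k (relevance,
    emotion, specificity). Returns an n x n matrix whose cell [i][j] is
    |(j - mu_i) / sigma_i|: how far candidate position j lies from
    review i's mean position across the three rankings (Eq. 2)."""
    n = len(rankings[0])
    matrix = []
    for i in range(n):
        positions = [rank[i] for rank in rankings]
        mu = statistics.mean(positions)
        sigma = statistics.pstdev(positions) or 1e-9  # guard: all equal
        matrix.append([abs((j - mu) / sigma) for j in range(n)])
    return matrix
```

Each row is the "cost profile" of one review over all candidate positions; the assignment step then fills positions so that the aggregate cost is kept low.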
|
{ |
|
"text": "where p is the proposed ranking position. The intuition behind this is to find the most statistically best ranking position by minimizing the aggregate z-score distance globally. The idea is from where they used exhaustive process for all possible features combination to find the best combination for helpfulness prediction. However, instead of using exhaustive process, we use a faster approach. The rows of matrix represent the number of reviews for each product and the columns represent the number of possible positions in the ranking (i.e., this is a squared matrix). Each cell of the matrix (c ij ) contains a position score calculated for review r i and position j using equation 2. The z-score tells us how far each of the proposed ranking positions is from the mean position of the review. We further add the previously calculated RAD value to the z-scores Algorithm 1 The proposed algorithm for ranking online reviews based on their helpfulness Require: List of reviews and their star ratings R = {(ri, si)}i=1..n related to a single product Ensure: The reviews ranked according to their helpfulness 1: join review = join all reviews in R 2: sum = convert join review into Roberta embedding 3: for each review ri in R do 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ri embed = convert ri into Roberta embedding", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ri relevance score = CosineSimilarity(sum, ri embed)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for each word wj in ri do ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ri specif icity score = count number of entities in ri 13: end for 14: 15: rank1 = rank R based on {ri relevance score}i=1...n 16: rank2 = rank R based on {ri emotion score}i=1...n 17: rank3 = rank R based on {ri specif icity score}i=1...n 18: rank combine = combine all ranking (rank1,rank2,rank3) 19: 20: avg star = average of all star ratings {si}i=1...n 21: for each ri in R do 22: 34: end for 35: select column where total score=max(total score) 36: assign review at the position where position score=min(position score) 37: delete the column and row and repeat step 28-37 until convergence in the matrix. The final step is to find out which set of ranking positions of the reviews gives the lowest total z-score distance. For this purpose we use an iterative solution (lines 28-37) which is explained below. For each column, we sum its values and subtract the minimum value from this column, obtaining a score referred to as total score. Then, we select the column with the maximum total score. After that, we find the minimum value in that column. The corresponding review is then assigned to the position. The next step is to delete the column and row and repeat the same for the rest of reviews until all the positions are filled. For instance, if the largest total score is at column 4 and the minimum position score on that column belongs to review 1 , then assign the review 1 to that position, i.e 4 which is now the re-ranked position of review 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 298, |
|
"text": "(rank1,rank2,rank3)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "12:", |
|
"sec_num": null |
|
}, |
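The iterative assignment of lines 28-37 can be sketched as a greedy loop. This is our reading of the pseudocode: `assign_positions` is a hypothetical name, and the matrix argument is assumed to already contain the z-scores with the RAD values added in.

```python
def assign_positions(matrix):
    """Greedy assignment (lines 28-37 of Algorithm 1): repeatedly pick the
    column (position) with the largest 'total score' (column sum minus the
    column minimum) and give that position to the review holding the
    column's minimum score; then remove both from consideration."""
    n = len(matrix)
    free_rows, free_cols = set(range(n)), set(range(n))
    assignment = {}  # review index -> final ranking position
    while free_cols:
        best_col = max(
            free_cols,
            key=lambda j: sum(matrix[i][j] for i in free_rows)
                          - min(matrix[i][j] for i in free_rows))
        best_row = min(free_rows, key=lambda i: matrix[i][best_col])
        assignment[best_row] = best_col
        free_rows.discard(best_row)
        free_cols.discard(best_col)
    return assignment
```

Resolving the most "contested" column first (the one whose other candidates would pay the largest aggregate cost) is what makes this a fast heuristic alternative to the exhaustive search mentioned in the text.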
|
{ |
|
"text": "RADi = (1 \u2212 \u03b1) * |si \u2212", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "12:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the purpose of this study, we use dataset from Amazon 3 reviews (from May 1996 -July 2014) for 3 http://jmcauley.ucsd.edu/data/amazon/ four categories of products, namely (1) Electronics, (2) Books, (3) CDs Vinyls and (4) Movies TV products, with raw data size of 1.48 GB, 9.46 GB, 1.33 GB, and 1.93 GB respectively. In this study, we only use four features: ASIN as a unique product id, ReviewText for performing the three rankings, Overall in order to include the rating star in the final ranking and Helpfulness Votes for the evaluation purposes. All the data has been processed and filtered according to the following steps. First, the product should have minimum 30 reviews. Each review should contain minimum four sentences. The review should have minimum five helpfulness votes. The details regarding the size of each dataset before and after pre-processing are listed in Table 1 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 883, |
|
"end": 890, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
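The three filtering steps above can be sketched as below. The record field names (`asin`, `reviewText`, `helpful_votes`) and the naive punctuation-based sentence splitter are assumptions about the raw data layout, not the authors' exact preprocessing (the raw Amazon dumps store votes differently).

```python
import re
from collections import defaultdict

def filter_reviews(records, min_reviews=30, min_sentences=4, min_votes=5):
    """Keep reviews of products with >= min_reviews reviews, where the
    review has >= min_sentences sentences and >= min_votes helpfulness
    votes (the three filtering steps described in the text)."""
    by_product = defaultdict(list)
    for rec in records:
        by_product[rec["asin"]].append(rec)

    kept = []
    for recs in by_product.values():
        if len(recs) < min_reviews:  # product-level filter
            continue
        for rec in recs:
            sentences = [s for s in re.split(r"[.!?]+", rec["reviewText"])
                         if s.strip()]
            if (len(sentences) >= min_sentences
                    and rec["helpful_votes"] >= min_votes):
                kept.append(rec)
    return kept
```

The product-level threshold runs first so that short or sparsely voted reviews of a popular product are dropped individually, while all reviews of rarely reviewed products are discarded wholesale.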
|
{ |
|
"text": "As a baseline, we implemented the state-of-theart unsupervised ranking method MRR (Woloszyn et al., 2017), which has been described in Section 2. This is the most recent work that has been done in this space using unsupervised learning. In the original paper (Woloszyn et al., 2017) , the results were also compared with two other unsupervised approaches and supervised models and it was demonstrated that MRR outperformed others baseline (Tsur and Rappoport, 2006; Wu et al., 2011) . Therefore, we only use MRR as the baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 465, |
|
"text": "(Tsur and Rappoport, 2006;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 482, |
|
"text": "Wu et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 282, |
|
"text": "(Woloszyn et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
        "text": "For further evaluation, we also explored different variants of our proposed method. We considered using the summary of each review instead of its full content. The summaries were first obtained with the SUMY library 4 and then provided as input to Algorithm 1. We also evaluated the performance of our method using only the relevance ranking, in order to validate the usefulness of the emotional intensity and specificity rankings in the process. Finally, we considered the performance of our method without applying the star rating.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
        "text": "For the evaluation, we use the NDCG metric (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002). NDCG measures the quality of a ranking or recommendation system based on list positions. For ranking evaluation with NDCG, we use the helpfulness votes feature as the relevance value to determine the ranking. The relevance value for the NDCG is calculated from the helpfulness votes obtained from Amazon using the gold standard of (Woloszyn et al., 2017) , given in Eq. 3:",
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 63, |
|
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 443, |
|
"text": "(Woloszyn et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "H(r \u2208 R) = vote + (r) vote + (r) + vote \u2212 (r)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Baseline and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
        "text": "Where r is a review, vote+ is the number of customers who voted the review as helpful and vote\u2212 is the number of customers who voted it as unhelpful. H(r \u2208 R) is then used as the relevance value for the NDCG.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline and Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
        "text": "The results obtained by each of the evaluated methods on each of the four datasets are presented in Tables 2-5 . Each table shows the results obtained by each method with and without incorporating the star rating. The first row in each table refers to the results obtained by the state-of-the-art unsupervised baseline (MRR) (Woloszyn et al., 2017) . Rows 2 and 3 show the results obtained by our method based only on the relevance ranking, using the full text or the summary of the reviews, respectively. The last two rows refer to the results obtained when all three rankings were incorporated in the process. We evaluate ranking quality using the NDCG metric at four ranking positions: NDCG@3, NDCG@5, NDCG@7, and NDCG@10, where the number after NDCG@ is the number of top-ranked reviews taken for evaluation.",
|
"cite_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 385, |
|
"text": "(Woloszyn et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 110, |
|
"text": "Tables 2-5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
        "text": "From the results presented in Tables 2-5 we can observe that, for each of the four datasets, the proposed method performed better when all three rankings were incorporated. This indicates that the emotional intensity and the specificity of a review are useful when determining its helpfulness. It can also be noted that our method obtained better results when the star rating is incorporated into the final ranking. The difference is particularly apparent for the Books dataset. Finally, we can see that the proposed method performed better when applied to the full review content (Relevance(full text)+emotion+specify) than to the summary (Relevance(summary)+emotion+specify) on three out of four datasets. The only case where using the summaries of the reviews made a positive difference is the CDs & Vinyls dataset. Looking at the overall results (both with and without the star rating), we can conclude that our proposed method performs best when each of the three rankings is performed on the full review content and the star rating is considered.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
        "text": "When comparing the proposed method with the baseline (MRR), we can observe from Tables 2-5 that we obtained better results according to each of the evaluation scores (NDCG@3, NDCG@5, NDCG@7, NDCG@10) on all datasets. For example, Table 2 shows that the scores of our combined ranking (relevance+emotion+specify) at NDCG@3, NDCG@5, NDCG@7, and NDCG@10 are 0.982, 0.977, 0.974, and 0.972, respectively, an improvement of 1% over the baseline. On other datasets, the improvement reaches up to 2% over the baseline, at NDCG@5 on the Books dataset and NDCG@3 on the Movies & TV dataset. To further evaluate the proposed method in comparison to the baseline, we assess whether the differences in their performances are statistically significant using the t-test. At the 0.05 significance level, the difference was statistically significant in 11 out of 16 cases; the 16 cases correspond to the four performance measures (NDCG@3, NDCG@5, NDCG@7, NDCG@10) calculated for each of the four datasets. The results obtained by our method on the Books and CDs & Vinyls datasets are significantly better in all four cases. For the Electronics dataset, only the NDCG@3 and NDCG@5 differences are statistically significant, while for the Movies & TV dataset only the NDCG@10 difference is.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 237, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
        "text": "This paper addresses the problem of ranking online reviews according to their helpfulness. We propose an unsupervised method that first ranks the reviews based on their relevance, emotional intensity and specificity, and then combines these rankings to obtain the final helpfulness ranking. The performance of the method on four datasets created for the purpose of this study was evaluated using the NDCG metric. It was demonstrated that the method outperformed the state-of-the-art unsupervised online review ranking method proposed in (Woloszyn et al., 2017) in every case. In future work, we plan to improve our ranking system by applying different features and ranking methods. Features such as linguistic features, positive and negative emotion, or topic sentences could be explored. Moreover, different rank-combination methods such as the Schulze method (Schulze, 2018) or the Borda count (Emerson, 2013) could be explored to improve the performance.",
|
"cite_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 570, |
|
"text": "(Woloszyn et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 899, |
|
"text": "(Schulze, 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 915, |
|
"end": 930, |
|
"text": "(Emerson, 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://www.nltk.org/book/ch07.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://pypi.org/project/sumy/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Employing Deep Learning Methods for Predicting Helpful Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Abdalraheem", |
|
"middle": [], |
|
"last": "Alsmadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shadi", |
|
"middle": [], |
|
"last": "Alzu'bi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bilal", |
|
"middle": [], |
|
"last": "Hawashin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahmoud", |
|
"middle": [], |
|
"last": "Al-Ayyoub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Jararweh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "11th International Conference on Information and Communication Systems, ICICS 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICICS49469.2020.239504" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdalraheem Alsmadi, Shadi Alzu'bi, Bilal Hawashin, Mahmoud Al-Ayyoub, and Yaser Jararweh. 2020. Employing Deep Learning Methods for Predicting Helpful Reviews. 2020 11th International Confer- ence on Information and Communication Systems, ICICS 2020, pages 7-12.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Amazon's Top Customer Reviewers", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Amazon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Com", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amazon.com. Amazon's Top Customer Reviewers.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "DepecheMood++: a Bilingual Emotion Lexicon Built Through Simple Yet Powerful Techniques", |
|
"authors": [ |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "Araque", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lorenzo", |
|
"middle": [], |
|
"last": "Gatti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacopo", |
|
"middle": [], |
|
"last": "Staiano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Guerini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oscar Araque, Lorenzo Gatti, Jacopo Staiano, and Marco Guerini. 2018. DepecheMood++: a Bilin- gual Emotion Lexicon Built Through Simple Yet Powerful Techniques.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "How opinions are received by online communities: a case study on amazon. com helpfulness votes", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Danescu-Niculescu-Mizil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gueorgi", |
|
"middle": [], |
|
"last": "Kossinets", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 18th international conference on World wide web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, and Lillian Lee. 2009. How opinions are received by online communities: a case study on amazon. com helpfulness votes. In Proceedings of the 18th international conference on World wide web, pages 141-150.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Feature selection for helpfulness prediction of online product reviews: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "Jiahua", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Rong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Michalska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanchun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "PloS one", |
|
"volume": "14", |
|
"issue": "12", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiahua Du, Jia Rong, Sandra Michalska, Hua Wang, and Yanchun Zhang. 2019. Feature selection for helpfulness prediction of online product reviews: An empirical study. PloS one, 14(12):e0226902.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The original borda count and partial voting", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"Emerson" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Social Choice and Welfare", |
|
"volume": "40", |
|
"issue": "2", |
|
"pages": "353--358", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Emerson. 2013. The original borda count and par- tial voting. Social Choice and Welfare, 40(2):353- 358.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Helpfulness prediction of online product reviews", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Md Enamul Haque", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the ACM Symposium on Document", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3209280.3229105" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Enamul Haque, Mehmet Engin Tozal, and Aminul Islam. 2018. Helpfulness prediction of online prod- uct reviews. Proceedings of the ACM Symposium on Document Engineering 2018, DocEng 2018.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Cumulated gain-based evaluation of IR techniques", |
|
"authors": [ |
|
{ |
|
"first": "Kalervo", |
|
"middle": [], |
|
"last": "J\u00e4rvelin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaana", |
|
"middle": [], |
|
"last": "Kek\u00e4l\u00e4inen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "422--446", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/582415.582418" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422- 446.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Detecting product review spammers using activity model", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceeding of International Conference on Advanced Computer Science and Electronics Information (ICAC-SEI 2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "650--653", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Jiang, B Chen, et al. 2013. Detecting product re- view spammers using activity model. In Proceed- ing of International Conference on Advanced Com- puter Science and Electronics Information (ICAC- SEI 2013), pages 650-653. Citeseer.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Opinion spam and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 international conference on web search and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "219--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Jindal and Bing Liu. 2008. Opinion spam and analysis. In Proceedings of the 2008 international conference on web search and data mining, pages 219-230.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Authoritative sources in a hyperlinked environment", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Jon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Albert-L\u00e1szl\u00f3", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan J", |
|
"middle": [], |
|
"last": "Barab\u00e1si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Watts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jon M Kleinberg, Mark Newman, Albert-L\u00e1szl\u00f3 Barab\u00e1si, and Duncan J Watts. 2011. Authoritative sources in a hyperlinked environment. Princeton University Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Linguistic features for review helpfulness prediction", |
|
"authors": [ |
|
{ |
|
"first": "Srikumar", |
|
"middle": [], |
|
"last": "Krishnamoorthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Expert Systems with Applications", |
|
"volume": "42", |
|
"issue": "7", |
|
"pages": "3751--3759", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.eswa.2014.12.044" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srikumar Krishnamoorthy. 2015. Linguistic features for review helpfulness prediction. Expert Systems with Applications, 42(7):3751-3759.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Detecting product review spammers using rating behaviors", |
|
"authors": [ |
|
{ |
|
"first": "Ee-Peng", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Viet-An", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Jindal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hady Wirawan", |
|
"middle": [], |
|
"last": "Lauw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 19th ACM international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "939--948", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.tele.2018.01.001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ee-Peng Lim, Viet-An Nguyen, Nitin Jindal, Bing Liu, and Hady Wirawan Lauw. 2010. Detecting product review spammers using rating behaviors. In Pro- ceedings of the 19th ACM international conference on Information and knowledge management, pages 939-948.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Using Argument-based features to predict and analyse review helpfulness", |
|
"authors": [ |
|
{ |
|
"first": "Haijing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pin", |
|
"middle": [], |
|
"last": "Lv", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengxue", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiqiang", |
|
"middle": [], |
|
"last": "Geng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minglan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP 2017 -Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1358--1363", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/d17-1142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haijing Liu, Yang Gao, Pin Lv, Mengxue Li, Shiqiang Geng, Minglan Li, and Hao Wang. 2017. Using Argument-based features to predict and analyse re- view helpfulness. EMNLP 2017 -Conference on Empirical Methods in Natural Language Processing, Proceedings, pages 1358-1363.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Helpfulness of product reviews as a function of discrete positive and negative emotions", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S I" |
|
], |
|
"last": "Malik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ayyaz", |
|
"middle": [], |
|
"last": "Hussain", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computers in Human Behavior", |
|
"volume": "73", |
|
"issue": "", |
|
"pages": "290--302", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.chb.2017.03.053" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. S.I. Malik and Ayyaz Hussain. 2017. Helpfulness of product reviews as a function of discrete positive and negative emotions. Computers in Human Behav- ior, 73:290-302.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Prediction of helpful reviews using emotions extraction", |
|
"authors": [ |
|
{ |
|
"first": "Lionel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pearl", |
|
"middle": [], |
|
"last": "Pu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the National Conference on Artificial Intelligence", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1551--1557", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lionel Martin and Pearl Pu. 2014. Prediction of help- ful reviews using emotions extraction. Proceedings of the National Conference on Artificial Intelligence, 2:1551-1557.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "What yelp fake review filter might be doing?", |
|
"authors": [ |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Venkataraman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalie", |
|
"middle": [], |
|
"last": "Glance", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Seventh international AAAI conference on weblogs and social media", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie Glance. 2013a. What yelp fake review fil- ter might be doing? In Seventh international AAAI conference on weblogs and social media.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Fake review detection: Classification and analysis of real and pseudo reviews", |
|
"authors": [ |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Venkataraman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalie", |
|
"middle": [], |
|
"last": "Glance", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arjun Mukherjee, Vivek Venkataraman, Bing Liu, Na- talie Glance, et al. 2013b. Fake review detection: Classification and analysis of real and pseudo re- views. UIC-CS-03-2013. Technical Report.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The pagerank citation ranking: Bringing order to the web", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Page", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Brin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajeev", |
|
"middle": [], |
|
"last": "Motwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Winograd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. Technical report, Stanford InfoLab.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Try to find fake reviews with semantic and relational discovery", |
|
"authors": [ |
|
{ |
|
"first": "Xianguo", |
|
"middle": [], |
|
"last": "Dao Runa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongxin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 13th International Conference on Semantics, Knowledge and Grids (SKG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "234--239", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dao Runa, Xianguo Zhang, and Yongxin Zhai. 2017. Try to find fake reviews with semantic and relational discovery. In 2017 13th International Conference on Semantics, Knowledge and Grids (SKG), pages 234-239. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Detection of spam reviews: a sentiment analysis approach", |
|
"authors": [ |
|
{ |
|
"first": "Sunil", |
|
"middle": [], |
|
"last": "Saumya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyoti", |
|
"middle": [ |
|
"Prakash" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Csi Transactions on ICT", |
|
"volume": "6", |
|
"issue": "2", |
|
"pages": "137--148", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunil Saumya and Jyoti Prakash Singh. 2018. Detec- tion of spam reviews: a sentiment analysis approach. Csi Transactions on ICT, 6(2):137-148.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Detection of opinion spam based on anomalous rating deviation. Expert Systems with Applications", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Savage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiuzhen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinghuo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pauline", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qingmai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "8650--8657", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Savage, Xiuzhen Zhang, Xinghuo Yu, Pauline Chou, and Qingmai Wang. 2015. Detection of opin- ion spam based on anomalous rating deviation. Ex- pert Systems with Applications, 42(22):8650-8657.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "What Makes a Helpful Online Review? A Study of Customer Reviews on", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Schuff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Mudambi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Amazon.com1 By", |
|
"volume": "34", |
|
"issue": "1", |
|
"pages": "185--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Schuff and Susan Mudambi. 2010. What Makes a Helpful Online Review? A Study of Customer Re- views on Amazon.com1 By:. 34(1):185-200.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The schulze method of voting", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Schulze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.02973" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Schulze. 2018. The schulze method of voting. arXiv preprint arXiv:1804.02973.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Explaining and predicting online review helpfulness: The role of content and reviewer-related signals", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Siering", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Muntermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balaji", |
|
"middle": [], |
|
"last": "Rajagopalan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Decision Support Systems", |
|
"volume": "108", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.dss.2018.01.004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Siering, Jan Muntermann, and Balaji Ra- jagopalan. 2018. Explaining and predicting on- line review helpfulness: The role of content and reviewer-related signals. Decision Support Systems, 108:1-12.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Predicting the \"helpfulness\" of online consumer reviews", |
|
"authors": [ |
|
{ |
|
"first": "Jyoti", |
|
"middle": [ |
|
"Prakash" |
|
], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seda", |
|
"middle": [], |
|
"last": "Irani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nripendra", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Rana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yogesh", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Dwivedi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunil", |
|
"middle": [], |
|
"last": "Saumya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradeep Kumar", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of Business Research", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "346--355", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.jbusres.2016.08.008" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jyoti Prakash Singh, Seda Irani, Nripendra P. Rana, Yo- gesh K. Dwivedi, Sunil Saumya, and Pradeep Kumar Roy. 2017. Predicting the \"helpfulness\" of online consumer reviews. Journal of Business Research, 70:346-355.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The Magic Behind Amazon's 2.7 Billion Dollar Question-UX Articles by UIE", |
|
"authors": [ |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Spool", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jared Spool. 2009. The Magic Behind Amazon's 2.7 Billion Dollar Question-UX Articles by UIE. Standard score. 2021. Standard score -Wikipedia.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "R EV R ANK : a Fully Unsupervised Algorithm for Selecting the Most Helpful Book Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Tsur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "154--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Tsur and Ari Rappoport. 2006. R EV R ANK : a Fully Unsupervised Algorithm for Selecting the Most Helpful Book Reviews. Artificial Intelligence, pages 154-161.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "MRR: An unsupervised algorithm to rank reviews by relevance", |
|
"authors": [ |
|
{ |
 |
"first": "Vinicius", |
 |
"middle": [], |
 |
"last": "Woloszyn", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Henrique", |
 |
"middle": [ |
 |
"D.P." |
 |
], |
 |
"last": "Dos Santos", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Leandro", |
 |
"middle": [ |
 |
"Krug" |
 |
], |
 |
"last": "Wives", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Karin", |
 |
"middle": [], |
 |
"last": "Becker", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings -2017 IEEE/WIC/ACM International Conference on Web Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "877--883", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3106426.3106444" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vinicius Woloszyn, Henrique D.P. Dos Santos, Lean- dro Krug Wives, and Karin Becker. 2017. MRR: An unsupervised algorithm to rank reviews by rel- evance. Proceedings -2017 IEEE/WIC/ACM Inter- national Conference on Web Intelligence, WI 2017, pages 877-883.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "An unsupervised approach to rank product reviews", |
|
"authors": [ |
|
{ |
|
"first": "Jianwei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings -2011 8th International Conference on Fuzzy Systems and Knowledge Discovery", |
|
"volume": "2011", |
|
"issue": "", |
|
"pages": "1769--1772", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/FSKD.2011.6019793" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianwei Wu, Bing Xu, and Sheng Li. 2011. An un- supervised approach to rank product reviews. Pro- ceedings -2011 8th International Conference on Fuzzy Systems and Knowledge Discovery, FSKD 2011, 3:1769-1772.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Temporal Model of the Online Customer Review Helpfulness Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Shih-Hung", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi-Hsiang", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang-Pu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ping-Che", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liu", |
|
"middle": [], |
|
"last": "Fanghuizhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "737--742", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3110025.3110156" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shih-Hung Wu, Yi-Hsiang Hsieh, Liang-Pu Chen, Ping-Che Yang, and Liu Fanghuizhu. 2017. Tem- poral Model of the Online Customer Review Help- fulness Prediction. pages 737-742.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Detecting collusive spammers in online review communities", |
|
"authors": [ |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the sixth workshop on Ph. D. students in information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang Xu. 2013. Detecting collusive spammers in online review communities. In Proceedings of the sixth workshop on Ph. D. students in information and knowledge management, pages 33-40.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "j] = sum all emotions intensities of wj from DepecheM ood++ = sum(emotion scores)", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"text": "Amazon dataset 964", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Evaluation metric Electronics dataset", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td colspan=\"8\">with rating star NDCG@3 NDCG@5 NDCG@7 NDCG@10 NDCG@3 NDCG@5 NDCG@7 NDCG@10 without rating star</td></tr><tr><td>MRR</td><td>0.957</td><td>0.944</td><td>0.94</td><td>0.936</td><td/><td/><td/><td/></tr><tr><td>Relevance (summary)</td><td>0.959</td><td>0.948</td><td>0.943</td><td>0.939</td><td>0.959</td><td>0.948</td><td>0.943</td><td>0.939</td></tr><tr><td>Relevance (full text)</td><td>0.958</td><td>0.946</td><td>0.94</td><td>0.937</td><td>0.958</td><td>0.946</td><td>0.94</td><td>0.937</td></tr><tr><td>Relevance(full text)+emotion+specify</td><td>0.969</td><td>0.957</td><td>0.952</td><td>0.946</td><td>0.958</td><td>0.945</td><td>0.94</td><td>0.936</td></tr><tr><td colspan=\"2\">Relevance(summary)+emotion+specify 0.968</td><td>0.957</td><td>0.951</td><td>0.946</td><td>0.957</td><td>0.945</td><td>0.939</td><td>0.935</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Evaluation metric Books dataset", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Method</td><td colspan=\"8\">with rating star NDCG@3 NDCG@5 NDCG@7 NDCG@10 NDCG@3 NDCG@5 NDCG@7 NDCG@10 without rating star</td></tr><tr><td>MRR</td><td>0.961</td><td>0.947</td><td>0.94</td><td>0.935</td><td/><td/><td/><td/></tr><tr><td>Relevance (summary)</td><td>0.96</td><td>0.945</td><td>0.939</td><td>0.935</td><td>0.96</td><td>0.945</td><td>0.939</td><td>0.935</td></tr><tr><td>Relevance (full text)</td><td>0.962</td><td>0.948</td><td>0.941</td><td>0.938</td><td>0.962</td><td>0.948</td><td>0.941</td><td>0.938</td></tr><tr><td>Relevance(full text)+emotion+specify</td><td>0.968</td><td>0.957</td><td>0.952</td><td>0.949</td><td>0.967</td><td>0.955</td><td>0.949</td><td>0.947</td></tr><tr><td colspan=\"2\">Relevance(summary)+emotion+specify 0.97</td><td>0.96</td><td>0.955</td><td>0.951</td><td>0.967</td><td>0.956</td><td>0.951</td><td>0.948</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"text": "Evaluation metric Movie & TV dataset", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |