|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:43.339567Z" |
|
}, |
|
"title": "Error-Sensitive Evaluation for Ordinal Target Variables", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Maury", |
|
"middle": [], |
|
"last": "Courtland", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Faulkner", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Aysu", |
|
"middle": [ |
|
"Ezen" |
|
], |
|
"last": "Can", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Product reviews and satisfaction surveys seek customer feedback in the form of ranked scales. In these settings, widely used evaluation metrics including F1 and accuracy ignore the rank in the responses (e.g., 'very likely' is closer to 'likely' than 'not at all'). In this paper, we hypothesize that the order of class values is important for evaluating classifiers on ordinal target variables and should not be disregarded. To test this hypothesis, we compared Multi-class Classification (MC) and Ordinal Regression (OR) by applying OR and MC to benchmark tasks involving ordinal target variables using the same underlying model architecture. Experimental results show that while MC outperformed OR for some datasets in accuracy and F1, OR is significantly better than MC for minimizing the error between prediction and target for all benchmarks, as revealed by error-sensitive metrics, e.g. mean-squared error (MSE) and Spearman correlation. Our findings motivate the need to establish consistent, error-sensitive metrics for evaluating benchmarks with ordinal target variables, and we hope that it stimulates interest in exploring alternative losses for ordinal problems.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Product reviews and satisfaction surveys seek customer feedback in the form of ranked scales. In these settings, widely used evaluation metrics including F1 and accuracy ignore the rank in the responses (e.g., 'very likely' is closer to 'likely' than 'not at all'). In this paper, we hypothesize that the order of class values is important for evaluating classifiers on ordinal target variables and should not be disregarded. To test this hypothesis, we compared Multi-class Classification (MC) and Ordinal Regression (OR) by applying OR and MC to benchmark tasks involving ordinal target variables using the same underlying model architecture. Experimental results show that while MC outperformed OR for some datasets in accuracy and F1, OR is significantly better than MC for minimizing the error between prediction and target for all benchmarks, as revealed by error-sensitive metrics, e.g. mean-squared error (MSE) and Spearman correlation. Our findings motivate the need to establish consistent, error-sensitive metrics for evaluating benchmarks with ordinal target variables, and we hope that it stimulates interest in exploring alternative losses for ordinal problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Organizations have a vested interest in ensuring customer happiness. To measure this quantity, analysts often use surveys containing numerical Likert scales addressing various aspects of the customer experience (Allen and Seaman, 2007) . One popular question asks customers the likelihood that they will recommend a product or service to others. From these answers, analysts calculate a \"Net Promoter Score\" (NPS) representing the percentage of customers who will recommend a product or service to others minus those who will recommend against it (Reichheld, 2003) . Additionally, many companies are also interested in tracking product reviews (Keung et al., 2020) . Collecting, measuring, and analyzing customer feedback is essential to the profitability and long-term success of many companies, but it is prohibitively expensive to survey the entire customer base and even the feedback that a company does receive is often too massive for systematic human evaluation. Therefore, it is important to develop effective machine learning models for predicting customer satisfaction and to maintain consistent and accurate methods and metrics for evaluating their performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 235, |
|
"text": "(Allen and Seaman, 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 564, |
|
"text": "(Reichheld, 2003)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 644, |
|
"end": 664, |
|
"text": "(Keung et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
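{

"text": "As a concrete illustration of the NPS calculation referenced above, the following minimal Python sketch (hypothetical responses; the conventional 0-10 scale with promoters 9-10 and detractors 0-6 is assumed, since the exact cutoffs are not specified here) computes the score from a list of survey ratings:\ndef net_promoter_score(ratings):\n    # NPS = percentage of promoters minus percentage of detractors.\n    promoters = sum(1 for r in ratings if r >= 9)\n    detractors = sum(1 for r in ratings if r <= 6)\n    return 100.0 * (promoters - detractors) / len(ratings)\n\nsurvey = [10, 9, 9, 8, 7, 6, 3, 10]  # hypothetical responses\nprint(round(net_promoter_score(survey), 1))  # 4 promoters, 2 detractors -> 25.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},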
|
{ |
|
"text": "An important aspect of modeling customer sentiment is the subjective numerical ranking of the customer response. Feedback is often in the form of ranked scales, e.g. rating scales 1-5 or 1-10, or textual \"Strongly Agree,\" \"Disagree,\" etc. Crucially, these Likert scales are ordinal and should not be confused with scalar values: a rating of 5 or \"Strongly Agree\" is not necessarily 5 times greater than a rating of 1 or \"Strongly Disagree\" (Allen and Seaman, 2007) . To predict ordinal target variables from textual input, we explored a variety of commonplace and cutting-edge NLP techniques, ranging from linear models such as Naive Bayes and Logistic Regression to Transformer-based approaches such as BERT (Devlin et al., 2019) and \"Performer\" (Choromanski et al., 2021) . One of the most striking observations from our experiments was that the most impactful experimental variable was not necessarily the model architecture itself, but rather the loss function employed and classification scheme, i.e. target variable encoding. Specifically, we highlight our use of Ordinal Regression (OR), which is an approach that is underutilized in the field of NLP. This approach has a long history (Frank and Hall, 2001; Graepel and Obermayer, 1999; Guti\u00e9rrez et al., 2016; Baccianella et al., 2009) and has recently been applied to deep neural network models predicting ordinal labels (Cao et al., 2020) . Here, we extend this framework into the domain of NLP and apply it to train transformer models as a novel application of previous work on OR and transformers. Our results showed that OR produced a distribution of predictions significantly closer to ground truth distributions than Multi-class Classification (MC) for both NPS and survey ratings (i.e. lower K-L divergence), while producing similar accuracies and F1. Additionally, we found that OR improved the correlation between model predictions and ground truth. Our findings highlight that common NLP metrics are insufficient to distinguish the better model(s) for our task, which motivated us to explore and identify error-sensitive metrics more consistent and effective for evaluating models with ordinal target variables.", |
|
"cite_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 464, |
|
"text": "(Allen and Seaman, 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 709, |
|
"end": 730, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 773, |
|
"text": "(Choromanski et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1192, |
|
"end": 1214, |
|
"text": "(Frank and Hall, 2001;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1215, |
|
"end": 1243, |
|
"text": "Graepel and Obermayer, 1999;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1244, |
|
"end": 1267, |
|
"text": "Guti\u00e9rrez et al., 2016;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1268, |
|
"end": 1293, |
|
"text": "Baccianella et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1380, |
|
"end": 1398, |
|
"text": "(Cao et al., 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There has been a longstanding debate (Knapp, 1990; Joshi et al., 2015) within the survey, psychometrics, and crowdsourcing methodology community regarding the use of Likert scales. In particular, a debate on whether the level of measurement in question is interval or ordinal (McCall, 2001 ) and whether parametric or nonparametric tests should should be used (Kuzon et al., 1996) . In a classification setting involving ordinal data, a related question involves the use of multinomial versus ordinal modeling for such data. Intuitively, the inclusion of ordinality in the classification model should improve performance relative to a multinomial approach, as shown in Campbell and Donner 1989 . Despite the prevalence of ordinally-scaled tasks in NLP, such as in sentiment analysis (Jiang et al., 2019; Pang and Lee, 2005) , stance classification (Sobhani et al., 2015) , lexical specificity (Gao et al., 2019) , political bias (Baly et al., 2018) , and commonsense inference (Zhang et al., 2017) , approaches to such tasks have tended to ignore the ordinal nature of the data, treating them as MC tasks. For example, only 2 of 11 participants in subtask C of SemEval 2016 (Rosenthal et al., 2017 ) (a 5-point-scale twitter sentiment classification task) chose to exploit the ordinal nature of the task in their models. Justifying this choice would involve systematically comparing the use of ordinal versus MC models for these tasks, yet most such comparisons have been reported as an incidental part of experiments for tasks such as sentiment analysis in tweets (Saad and Yang, 2019) and classification of psychiatric symptom severity in clinical notes (Rios and Kavuluru, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 50, |
|
"text": "(Knapp, 1990;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 70, |
|
"text": "Joshi et al., 2015)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 289, |
|
"text": "(McCall, 2001", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 380, |
|
"text": "(Kuzon et al., 1996)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 693, |
|
"text": "Campbell and Donner 1989", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 783, |
|
"end": 803, |
|
"text": "(Jiang et al., 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 804, |
|
"end": 823, |
|
"text": "Pang and Lee, 2005)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 870, |
|
"text": "(Sobhani et al., 2015)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 893, |
|
"end": 911, |
|
"text": "(Gao et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 948, |
|
"text": "(Baly et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 977, |
|
"end": 997, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1174, |
|
"end": 1197, |
|
"text": "(Rosenthal et al., 2017", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1656, |
|
"end": 1681, |
|
"text": "(Rios and Kavuluru, 2017)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We hypothesize that taking ordinal rankings into account would provide more consistent and better results. To test this, we experimented on 3 common benchmark datasets for sentiment analysis, and 1 additional dataset on Twitter specificity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Movie Reviews (MR): We used the dataset named \"scale dataset v1.0\" (Pang and Lee, 2005) . It contains full-length movie reviews from 4 authors on RottenTomatoes.com, and we used the 4-class variant, which is obtained by segmenting the ratings from the authors' normalized numerical scale into 4 ranks. Each author a, b, c, and d has 1770, 902, 1307 and 1027 reviews, respectively. The mean number of words per review for each author is 435, 374, 455 and 292, respectively, but the tail is long, with some reviews having 3k words or more. For this dataset, no test data was provided, so we report results on a test set from an 80%/20% train/test split of the dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 87, |
|
"text": "(Pang and Lee, 2005)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "IMDb: This dataset is titled IMDb Large Movie Dataset, and includes 50k movie reviews written by users on the website IMDb.com (25k train, 25k test) (Maas et al., 2011) . Each review had a rating of between 1-4 and 7-10 for low and high sentiment, respectively. Each movie in the dataset is reviewed less than 30 times and no movies reviewed in the training set also appears in the test set. For our experiments, we make use of the fine-grained labels, 1-4 and 7-10, which creates an 8-class classification problem. The reviews vary greatly by length, with some as short as 6 words and others up to 2.4k words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 168, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "SST-5: The SST-5 dataset is obtained from Socher et al. 2013, which consists of 215,154 unique phrases parsed from the corpus of 11,855 sentences (averaging 17 words each) from Pang and Lee 2005 . Each sentence is labeled by 3 annotators. The labeling interface utilizes a continuous sliding bar with guiding ticks indicating \"Very negative,\" \"Negative,\" \"Somewhat negative,\" \"Neutral,\" \"Somewhat positive,\" \"Positive,\" and \"Very positive.\" For the SST-5 fine-grained sentiment classification version, the slider responses are collapsed down to 5 ranks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 194, |
|
"text": "Pang and Lee 2005", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Specificity: This is a corpus of 7,267 tweets that were sampled by taking 2 tweets (excluding re-tweets) from users who have posted at least 4 tweets (3,665 users) (Gao et al., 2019) . The specificity annotations were based on a sliding-scale with 5 guiding options: \"1 -Very General,\" \"2 -General,\" \"3 -Specific,\" \"4 -Very Specific,\" and \"5 -Extremely Specific,\" where general refers to posts that do not make references to any specific person, object or event, and specific refers to posts that do. Each tweet was annotated by at least 5 workers on Amazon Mechanical Turk after filtering for low quality labels, with a resulting intra-class correlation coefficient of 0.575. In order to arrive at ordinal labels, we bin and collapse the specificity ratings from a continuous 1-5 scale down to ranks of 1, 2, 3, and 4, where each continuous numerical value, i, is rounded down, i.e. f loor(i), in order to reduce class imbalance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 182, |
|
"text": "(Gao et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3.1" |
|
}, |
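{

"text": "As a minimal sketch of the binning step described above (illustrative only, not the original preprocessing code), the continuous 1-5 specificity scores can be collapsed to ranks 1-4 by rounding down, with scores of exactly 5 clamped to the top rank (an assumption, since handling of the boundary is not stated):\nimport math\n\ndef to_rank(score):\n    # Collapse a continuous specificity score in [1, 5] to an ordinal rank in {1, 2, 3, 4}.\n    return min(math.floor(score), 4)\n\nprint([to_rank(s) for s in [1.2, 2.9, 3.5, 4.8, 5.0]])  # [1, 2, 3, 4, 4]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets",

"sec_num": "3.1"

},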
|
{ |
|
"text": "All our models share the same underlying architecture in order to minimize differences resulting from model parameterization and feature generation. Model sizes (i.e. # of parameters) differ for each dataset, as the average input sequence lengths are different (e.g. Tweets vs. movie reviews). In general, the model parameters of \"sequence length,\" \"embedding dimension,\" and \"feature size\" are scaled to the maximum number of tokens contained in any given input text from that particular dataset. Because both the MR and IMDb datasets contain reviews that are longer than typical pre-trained Transformer input sequence sizes, we leverage the \"Performer\" model architecture (Choromanski et al., 2021) for all models in order to accommodate these inputs without modifying the underlying model architecture. Finally, we train our models without any pre-training and without pre-trained embeddings so that we can have meaningful comparisons between the OR and MC methodologies with fewer confounding variables, as adding pre-training or pre-trained embeddings may change how effectively each methodology learns from the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 674, |
|
"end": 700, |
|
"text": "(Choromanski et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model architectures and parameters", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For MC models, we use the Performer architecture with cross-entropy loss. The ordinal labels are encoded as one-hot vectors of length K, where K is the number of classes, i.e. ratings. For OR models, we also use the Performer architecture with cross-entropy, but over K \u2212 1 classes, where each class represents a threshold decision of whether the rating is predicted to be greater than a value, e.g. is the rating greater than 2. To accommodate the OR loss function, which we adopt from COnsistent RAnk Logits (CORAL) (Cao et al., 2020) , we encode the ordinal target variables as vectors, v v v, with length N \u2212 1, and each index v k represents a binary indicator of rank threshold, i.e. v 1 = 1 if\u0177 1 > 0.5 and v 2 = 1 if y 2 > 0.5, etc. CORAL seeks to optimize the ordinal rank by penalizing misclassifications of thresholds, and it has theoretical guarantees for rank-consistency (rank-monotonicity), e.g. in order to predict 5 (i.e. >4), the model must also predict >1, >2, >3. This is achieved by allowing the K \u2212 1 binary thresholds to share the same weight parameters, W W W , but independent biases, b k . Specifically, we seek to minimize:", |
|
"cite_spans": [ |
|
{ |
|
"start": 518, |
|
"end": 536, |
|
"text": "(Cao et al., 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss functions and label encodings", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "L(W, b) = -\\sum_{i=1}^{N} \\sum_{k=1}^{K-1} \\lambda_k [\\log(\\hat{y}_i^{(k)}) y_i^{(k)} + \\log(1 - \\hat{y}_i^{(k)}) (1 - y_i^{(k)})], where N is the number of training examples, \u03bb_k is the weight associated with the k-th rank threshold, and \u0177_i^(k) = \u03c3(g(x_i, W) + b_k).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Loss functions and label encodings",

"sec_num": "3.3"

},
|
{ |
|
"text": "Here, \u03c3() is the logistic sigmoid function, x i is the input example, W W W are the model weight parameters, which are shared for all the binary rank thresholds, and b k is the independent bias unit for threshold at rank k. In optimizing this loss function, one can prove that the independent bias units will rank order and result in overall rank-monotonicity, i.e. b 1 \u2265 b 2 \u2265 b 3 , etc. (see Theorem 1 in Cao et al. 2020) . Due to this rankconsistency, a model prediction of\u0177 3 > 0.5 is always accompanied by both\u0177 2 > 0.5 and y 1 > 0.5. In other words, for v 3 = 1, both ", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 423, |
|
"text": "Cao et al. 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss functions and label encodings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "v i = 0, argmin(v v v), or 5 if v i = 1 for all i in [1..4].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Loss functions and label encodings", |
|
"sec_num": "3.3" |
|
}, |
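{

"text": "A minimal PyTorch sketch of the threshold encoding, CORAL-style loss, and rank-consistent decoding described above (illustrative only; OrdinalHead and the stand-in features are placeholders, not the implementation used here):\nimport torch\nimport torch.nn.functional as F\n\ndef encode_levels(labels, num_classes):\n    # labels: integer ranks in [0, K-1]; returns binary targets of length K-1,\n    # where v_k = 1 if the true rank exceeds threshold k.\n    k = torch.arange(num_classes - 1)\n    return (labels.unsqueeze(1) > k.unsqueeze(0)).float()\n\nclass OrdinalHead(torch.nn.Module):\n    # A single shared logit plus K-1 independent biases, as in CORAL.\n    def __init__(self, in_features, num_classes):\n        super().__init__()\n        self.fc = torch.nn.Linear(in_features, 1, bias=False)\n        self.biases = torch.nn.Parameter(torch.zeros(num_classes - 1))\n    def forward(self, features):\n        return self.fc(features) + self.biases  # (batch, K-1) threshold logits\n\ndef coral_loss(logits, levels, weights=None):\n    # -sum_k lambda_k [log sigma(z_k) v_k + log(1 - sigma(z_k)) (1 - v_k)]\n    term = F.logsigmoid(logits) * levels + F.logsigmoid(-logits) * (1.0 - levels)\n    if weights is not None:\n        term = term * weights\n    return -term.sum(dim=1).mean()\n\ndef predict_rank(logits):\n    # Rank-consistent decoding: count thresholds with probability above 0.5.\n    return (torch.sigmoid(logits) > 0.5).sum(dim=1)\n\nhead = OrdinalHead(in_features=16, num_classes=5)\nx = torch.randn(4, 16)          # stand-in for encoder features g(x)\ny = torch.tensor([0, 2, 3, 4])  # hypothetical ordinal labels\nloss = coral_loss(head(x), encode_levels(y, 5))\nloss.backward()\nprint(loss.item(), predict_rank(head(x)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Loss functions and label encodings",

"sec_num": "3.3"

},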
|
{ |
|
"text": "For all experiments, we use existing train/test splits for the benchmark dataset. If no train/test splits exist, we generate a 80%/20% train/test split and randomly select 20% of the training split as a validation split for tuning model parameters and early-stopping. To determine the early-stopping point, we select the training epoch at the inflection point where a 5-epoch moving average of the validation loss no longer improves.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameter tuning", |
|
"sec_num": "3.4" |
|
}, |
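{

"text": "A small sketch (hypothetical loss values; one reasonable reading of the rule above, not the exact implementation) of selecting the early-stopping epoch as the last point before a 5-epoch moving average of the validation loss stops improving:\ndef stop_epoch(val_losses, window=5):\n    # Moving average of the validation loss over the trailing window.\n    avg = [sum(val_losses[max(0, i - window + 1):i + 1]) / len(val_losses[max(0, i - window + 1):i + 1])\n           for i in range(len(val_losses))]\n    # Stop at the last epoch before the moving average stops improving.\n    for i in range(1, len(avg)):\n        if avg[i] >= avg[i - 1]:\n            return i - 1\n    return len(val_losses) - 1\n\nlosses = [1.0, 0.8, 0.7, 0.75, 0.9, 1.1, 1.2]  # hypothetical validation losses\nprint(stop_epoch(losses))  # 3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameter tuning",

"sec_num": "3.4"

},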
|
{ |
|
"text": "Our experiments with MC vs OR classifiers for benchmark datasets (Table 1 and Figure 1) show: 1) Performance, measured in accuracy and F1, varies depending on the underlying dataset, with IMDb and SST-5 favoring MC and MR and Specificity favoring OR. Also, accuracy and F1 are correlated across models and datasets. 2) In contrast, mean-squared error (MSE), Spearman's Rho, and K-L divergence consistently favor OR, with OR achieving the lowest MSE and K-L divergence and highest Spearman Rho across all datasets. Here, MSE could be replaced by mean absolute error (MAE) or root-mean-squared error (RMSE), both of which also select OR as the better methodology over MC (not shown). These observations suggest that typical model evaluation metrics such as accuracy and F1 score, frequently used in benchmarks and leaderboards for sentiment analysis (Ribeiro et al., 2016; Ruder, 2021; Barbieri et al., 2020) , may not successfully select the best performing models for classifying ordinal target variables. For all of our experiments, we determined that our OR models are significantly different from MC models, with OR and MC predictions producing statistically distinct distributions as determined by a paired t-test with p values less than 0.05.", |
|
"cite_spans": [ |
|
{ |
|
"start": 848, |
|
"end": 870, |
|
"text": "(Ribeiro et al., 2016;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 871, |
|
"end": 883, |
|
"text": "Ruder, 2021;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 906, |
|
"text": "Barbieri et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 87, |
|
"text": "(Table 1 and Figure 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
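{

"text": "For reference, a short sketch (assuming numpy and scipy, with hypothetical prediction arrays) of the error-sensitive metrics discussed above alongside accuracy, plus a paired significance test on the two models' absolute errors (one reasonable instantiation of the comparison; the exact test setup used here is not reproduced):\nimport numpy as np\nfrom scipy import stats\n\ndef evaluate(preds, targets):\n    preds, targets = np.asarray(preds), np.asarray(targets)\n    return {\n        'accuracy': float(np.mean(preds == targets)),\n        'mse': float(np.mean((preds - targets) ** 2)),\n        'mae': float(np.mean(np.abs(preds - targets))),\n        'spearman_rho': float(stats.spearmanr(preds, targets).correlation),\n    }\n\ntargets = np.array([1, 2, 3, 4, 5, 5, 2, 1])   # hypothetical ordinal labels\nor_preds = np.array([1, 2, 3, 3, 5, 4, 2, 2])  # hypothetical OR predictions\nmc_preds = np.array([1, 5, 3, 1, 5, 5, 2, 1])  # hypothetical MC predictions\nprint(evaluate(or_preds, targets))\nprint(evaluate(mc_preds, targets))\nprint(stats.ttest_rel(np.abs(or_preds - targets), np.abs(mc_preds - targets)).pvalue)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "4"

},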
|
{ |
|
"text": "While the goal of our experiments is to compare OR vs MC in a baseline setting and not to challenge current state-of-the-art models, we provide our results against benchmarks on the sentiment analysis datasets in order to give additional context to our models' performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "MR: Benchmarks and leaderboards on this specific dataset are sparse, as most researchers have opted to use the sentence-level polarity dataset from the same authors. The bestperforming model we have found is from Bickerstaffe and Zukerman 2010, which achieved author-level accuracies of 65.72%, 52.89%, 66.99%, and 51.87%, respectively (from 10-fold CV). This, on average, out-performed our bestperforming OR model, which achieved authorlevel accuracies of 62.12%, 54.78%, 54.85%, and 52.29%, respectively (from 20% test split). Specifically, Bickerstaffe and Zukerman 2010 out-performed on the majority-authors a and c while OR had a slight edge in the minorityauthors b and d. OR out-performed the original models shown in Pang and Lee 2005.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "IMDb: There is a lack of available benchmark data on the fine-grained 8-class version of the IMDb dataset, as most researchers opt to experiment on collapsing the labels to a binary classification problem. We obtain 34.98% and 31.19% accuracies for MC and OR, respectively on the fine-grained 8-class task. In an attempt to compare our result with previous work on the binary task, we collapse our predictions into binary format, by mapping predictions 1-4 to 0 and 7-10 to 1. With binary-mapping, we obtain 86.71% and 79.93% accuracies for OR and MC. The OR binary-mapped accuracy is on-par with previous benchmark results on binary IMDb predictions with vanilla CNNs and LSTMs (Camacho-Collados and Pilehvar, 2018) . Interestingly, OR outperforms MC for the binary-mapped accuracy while the 8-class accuracy favors MC. This further suggests that OR is more effective at minimizing rank-error compared to MC, as errors in the binary-mapped case represent greater magnitudes than in the 8-class case. SST-5: Our OR and MC models obtained 31.27% and 33.21% accuracy, respectively. This is substantially lower than baseline results in the literature, which typically achieve >40% accuracy (Socher et al., 2013) , with current SOTA in the 50%'s (Khodak et al., 2018) . The most comparable model from the literature is the \"VecAvg\" variant in the original SST-5 paper, which is a word-embedding model that has fine-grained accuracy of 32.7% (Socher et al., 2013) . The poor performance exhibited by our OR and MC models on SST-5 might be due to a couple factors: 1) our omitting pre-training in the form of a pre-trained language model or word embeddings, whereas typical protocols for SST-5 train embedding representations on additional data (Khodak et al., 2018) or on the sub-phrases in the SST-5 training set (Socher et al., 2013; Le and Mikolov, 2014) , 2) the Performer architecture may not be ideal for modeling very short input sequences, as it was designed to approximate the Attention matrix in order to have favorable time and memory scaling for long input sequences (Choromanski et al., 2021) , and 3) 56% of the words in the SST-5 training examples appear only once, which makes pre-training in the form of language models or word embeddings especially important for improving model performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 679, |
|
"end": 716, |
|
"text": "(Camacho-Collados and Pilehvar, 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1187, |
|
"end": 1208, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1242, |
|
"end": 1263, |
|
"text": "(Khodak et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1437, |
|
"end": 1458, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1739, |
|
"end": 1760, |
|
"text": "(Khodak et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1809, |
|
"end": 1830, |
|
"text": "(Socher et al., 2013;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1831, |
|
"end": 1852, |
|
"text": "Le and Mikolov, 2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 2074, |
|
"end": 2100, |
|
"text": "(Choromanski et al., 2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Specificity: This dataset is relatively new, and to our knowledge, there has not been any additional published work using this dataset for benchmarking. In addition, it is difficult to compare directly with the authors' results, as they used continuous target values and trained a Support Vector Regression model (Gao et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 313, |
|
"end": 331, |
|
"text": "(Gao et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For our use case of predicting NPS, we greatly value the accuracy of predicted survey ratings not only in absolute class agreement, but also in how closely the predicted ratings distribution matches the actual distribution. For accurate NPS prediction, our model rating distribution needs to reflect the actual distribution of NPS not just in aggregate, but across various business segments. To probe deeper into the performance differences between OR and MC, we examined the predicted distributions of Movie Ratings compared to ground truth.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Overall distributions for the MR and IMDb datasets ( Figure 2) show: 1) OR produces more accurate rating distributions, as measured by smoothed K-L divergence (Table 1) . 2) MC over-predicts majority classes in both datasets (2s and 3s for MR and 1s and 10s for IMDb) while under-predicting the others (except 2s and 3s in IMDb). These results are in line with the common observation that MC models tend to overfit on the majority classes in im- balanced datasets, which motivates the use of \"oversampling\" or class balancing (Buda et al., 2018; Chawla et al., 2002; Tepper et al., 2020; Gao et al., 2020) . OR, in contrast, provides a better fit for MR (slightly under-predicting for 1s), but significantly under-predicts on IMDb majority classes, displaying a much flatter distribution of predictions. 3) Lastly, in MC, for the IMDb distribution, where there are a greater number of classes (8 ratings considered), we observe a tendency to \"drop\" or ignore a particular class resulting in significant under-prediction, for example the 9s in the bottom right of Figure 2 . We observe this throughout training, and, depending on the epoch, have observed the model dropping other minority classes (2's, 3's, 6's, etc.), where we observe a recall less than 3%. This suggests that MC and OR lead to different behaviors with respect to predictive representation, as we discuss later.", |
|
"cite_spans": [ |
|
{ |
|
"start": 526, |
|
"end": 545, |
|
"text": "(Buda et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 566, |
|
"text": "Chawla et al., 2002;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 587, |
|
"text": "Tepper et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 605, |
|
"text": "Gao et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 62, |
|
"text": "Figure 2)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 168, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1063, |
|
"end": 1071, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
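{

"text": "A minimal sketch of the smoothed K-L divergence between predicted and actual rating distributions (hypothetical counts; additive smoothing is one plausible choice, since the exact smoothing used here is not specified):\nimport numpy as np\n\ndef smoothed_kl(pred_labels, true_labels, num_classes, alpha=1.0):\n    # KL(true || predicted) over class histograms, with additive smoothing so that\n    # classes the model never predicts do not yield infinite divergence.\n    p = np.bincount(true_labels, minlength=num_classes) + alpha\n    q = np.bincount(pred_labels, minlength=num_classes) + alpha\n    p, q = p / p.sum(), q / q.sum()\n    return float(np.sum(p * np.log(p / q)))\n\ntrue_labels = np.array([0, 1, 1, 2, 3, 3, 3, 2])  # hypothetical ratings\nmc_preds = np.array([1, 1, 1, 2, 3, 3, 1, 2])     # over-predicts a majority class\nor_preds = np.array([0, 1, 2, 2, 3, 3, 3, 1])     # closer to the true histogram\nprint(smoothed_kl(mc_preds, true_labels, 4), smoothed_kl(or_preds, true_labels, 4))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset benchmarks",

"sec_num": "4.1"

},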
|
{ |
|
"text": "To explore the performance of our model with respect to different data subsets, we calculate the smoothed K-L divergence for each of the four authors in the MR dataset ( Figure 3) . We find that OR greatly out-performs MC: while there are modest improvements in K-L divergence for authors b and d, we observe a two-to ten-fold increase for authors a and c, respectively. We hypothesize that MC is learning associations between review text and ratings as a whole by optimizing for overall accuracy in the predicted ratings, which may resemble learning how an amalgamation (or weighted average) of the 4 authors would rate a given movie. On the other hand, OR appears more sensitive to author-specific language, resulting in far lower K-L divergence values, but this may simply be due to the lower overall K-L divergence in OR calculated over the entire dataset. Lower overall and author-specific K-L divergence may be beneficial for applications with personalized predictions, i.e. in cases where performance on various data segments is important. Notably, authors a and c are the majority segments of the MR dataset, where each author a, b, c, and d has 1770, 902, 1307 and 1027 reviews, respectively. This may partially explain the improved K-L divergence of OR compared to MC for those segments, as there are more training examples for how authors a and c express their opinions on movie ratings. Table 1 showed that raw accuracy is insufficient to distinguish better performance when distribution fit and error distance are important. In the SemEval-2016 Task for 5-point sentiment analysis of Twitter posts, some contributors used macro-averaged mean absolute error (M AE M ) (Rosenthal et al., 2017) , which performs a macroaveraging of absolute errors between prediction and targets across all classes. This metric breaks down the error across target classes and can be better-suited for cases of class imbalance. While M AE M , MSE, and correlation are all error-sensitive, they do not give Figure 4 : To visualize the impact of optimizing for ordinal rank, we define \"accuracy at n\" or a @ n, which considers the closeness of predictions to truth. a @ n shows that for all benchmark datasets, OR outperforms MC for n > 0, where n is the absolute error.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1681, |
|
"end": 1705, |
|
"text": "(Rosenthal et al., 2017)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 170, |
|
"end": 180, |
|
"text": "Figure 3)", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1400, |
|
"end": 1407, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1999, |
|
"end": 2007, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "fine-grained insights into the degree of errors, e.g. how close did we get to the right answer? To show this, we extend M AE M and MSE with a metric that allows us to visualize degree of error: \"accuracy at n\" or a @ n (Figure 4) . a @ n calculates the proportion of predictions that are within an allowable absolute error, n (ranging from 0 to K \u2212 1), from their targets, y. In this metric, a @ 0 represents traditional accuracy. Accordingly, the graphs in Figure 4 begin at the values reported in Table 1 . However, as we increase n, we observe that across the board the a @ n for n > 0 is higher for OR compared to MC for all datasets. In other words, while MC predicts the exact rating correctly more often than OR for IMDb and SST-5, when OR predicts incorrectly, it generally gets closer to the target than MC. This is highly desirable for tasks where the degree of error is important.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 229, |
|
"text": "(Figure 4)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 466, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 506, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset benchmarks", |
|
"sec_num": "4.1" |
|
}, |
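{

"text": "A minimal sketch of the a @ n metric defined above (hypothetical arrays):\nimport numpy as np\n\ndef accuracy_at_n(preds, targets, n):\n    # Proportion of predictions whose absolute error is at most n ranks; a @ 0 is ordinary accuracy.\n    preds, targets = np.asarray(preds), np.asarray(targets)\n    return float(np.mean(np.abs(preds - targets) <= n))\n\ntargets = [1, 2, 3, 4, 5, 5, 2, 1]\npreds = [1, 3, 3, 2, 5, 4, 2, 3]\nfor n in range(3):\n    print(n, accuracy_at_n(preds, targets, n))  # 0.5, 0.75, 1.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset benchmarks",

"sec_num": "4.1"

},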
|
{ |
|
"text": "We have shown that while MC outperformed OR for some datasets in terms of accuracy and F1, OR is significantly better than MC for minimizing error between predictions and targets for all datasets, as revealed by error-sensitive metrics such as mean-squared error and Spearman Rho. This can lead to better performance in terms of representing distributions, as measured by smoothed K-L divergence, and min-imizing the magnitude of errors, as shown by a @ n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We attribute OR's superiority over MC on MSE and Spearman correlation to the difference in their loss functions. While both MC and OR utilize cross-entropy, the label encodings and model constraints account for major differences. MC assumes that each rating is independent and uses one-hot encodings. OR encodes the ratings as rank thresholds, v, where each index, v i , represents a binary indicator of threshold, i.e. v 1 = 1 if\u0177 1 > 0.5 and v 2 = 1 if\u0177 2 > 0.5, etc. This encoding combined with shared weight parameters and independent biases enforces rank-consistency, meaning a model prediction of\u0177 3 > 0.5 is accompanied by both\u0177 2 > 0.5 and\u0177 1 > 0.5 because b 1 \u2265 b 2 \u2265 b 3 (Cao et al., 2020) . Consequently, we have not observed rank-inconsistent predictions in our OR models. This rank-consistency places a constraint on the model, forcing it to learn the ordinal information separating different ratings. In terms of bias-variance tradeoff, MC results in a lower bias, higher variance model, while OR produces a higher bias, lower variance model for ordinal target variables. Therefore, OR optimizes for rank-error between prediction and target, leading to lower MSE and higher Spearman correlations compared to MC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 681, |
|
"end": 699, |
|
"text": "(Cao et al., 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Despite not directly optimizing for raw accuracy, we observe that OR still outperforms MC in accuracy and F1 for the MR and Specificity datasets. We hypothesize that this is related to the independence assumption that MC places during training, which may omit useful ordinal signal. While Likert-like scales can be highly subjective and inconsistent from one reviewer to another (Liang et al., 2020) due to cultural differences (Lee et al., 2002) , among other reasons, there is significant overlap in rating schemes among reviewers (e.g. all reviewers should agree that higher ratings are better than lower). This standardization is particularly apparent when ratings are consistently generated, as is the case with MR, where all reviews originate from four authors. Each author, while unique, has a self-consistent way of expressing their reviews, which allows their readers to understand their reasoning for assigning movie ratings. This intra-author consistency may en-hance the ordinal signal contained in the MR dataset. The Specificity dataset provides a related explanation. The ranks in the Specificity and MR datasets derive from labeled ratings that were collapsed from a more-continuous scale into 4 ranks. This may help to reduce the variance among the annotators. Additionally, the creators of the Specificity dataset took care to ensure that annotators agreed with oneanother, assigning multiple annotators to label each data point (Gao et al., 2019) . OR may also benefit from having fewer ranks, as more ranks create more opportunities for inconsistencies among reviewers as to what each rank represents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 399, |
|
"text": "(Liang et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 428, |
|
"end": 446, |
|
"text": "(Lee et al., 2002)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1447, |
|
"end": 1465, |
|
"text": "(Gao et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The complement to our previous observation is that MC outperforms OR in accuracy and F1 for IMDb and SST-5. For IMDb, we hypothesize that this is related to the granularity of the ranks (8 total classes), and the large number of reviewers (likely tens of thousands). In this case, each person's different notions of ratings introduces considerable noise into the ordinal signal. It seems that OR may be more sensitive to rater inconsistencies compared to MC because OR fits the model on the ordinal signal whereas MC makes a rank-independence assumption. For SST-5, the answer is less clear, as each sentence is annotated by 3 judges, and the labels are collapsed down to 5-classes from a continuous range obtained from a sliding bar. It is possible that OR may struggle to make the generalizations necessary to successfully distinguish different ranks when the input sequences are short, as words may not appear in more than one example. For SST-5, 56% of the words in the training split appear in only one sentence. This may create difficulties in learning to generalize across ranks. It also highlights the impact of pre-training either in the form of language models or word embeddings for performance (Khodak et al., 2018) . This pitfall is not apparent in the Specificity dataset, and we hypothesize that it is because the specificity task has significant correlations with the Tweet length itself (Gao et al., 2019) , so that learning word associations is less important.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1206, |
|
"end": 1227, |
|
"text": "(Khodak et al., 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1404, |
|
"end": 1422, |
|
"text": "(Gao et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In addition to rank-error minimization, OR avoids the class dropping problem we observe in MC. We hypothesize that MC drops classes because probability mass is shared among all classes via the Softmax activation. This results in a model bias to take probability mass away from minority classes to give to majority classes when it is uncertain, which we observe in MC over-predicting majority classes ( Figure 2 ). This is likely to happen in cases where the model cannot find reliable signal for particular minority class(es). In the extreme case, this leads to nearly no predictions for those minority classes, i.e. class dropping. OR, in contrast, avoids class dropping because its predictions do not share probability mass, i.e. each rank threshold represents its own prediction after Sigmoidal activation, without a Softmax. This builds inherent robustness into the prediction. For example, if the model is quite confident that a review is higher than 2 and less than 5, i.e.\u0177 2 >> 0.5 and\u0177 5 << 0.5, but unsure if it is a 3 or a 4, i.e.\u0177 3 \u2248 0.5 and\u0177 4 \u2248 0.5, rather than splitting probability mass between 3 and 4 as in MC (thus making other classes more likely in relation via Softmax), it can adjust the probability mass of v 3 independent of the other thresholds by adjusting b 3 . For inputs near the decision boundary for both v 3 and v 4 , the model will predict 3 and 4 in roughly equal proportions, avoiding a drop of either rank, whereas for MC, the model's bias towards majority classes coupled with Softmax activation may lead to dropping 3 or 4. Therefore, OR may produce better results in tasks with data imbalance, e.g. with highly-skewed or bi-/multi-modal distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 411, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
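{

"text": "A small numerical illustration of the contrast drawn above (hypothetical logits): with a Softmax, probability mass is shared across all classes, so splitting it between two plausible ratings also changes the remaining classes' share, whereas the sigmoid-per-threshold scheme scores each threshold independently:\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\ndef sigmoid(z):\n    return 1.0 / (1.0 + np.exp(-z))\n\n# MC: one logit per class (5 classes); an ambiguous example splits mass between\n# classes 3 and 4, which also inflates the other classes' relative share.\nprint(softmax(np.array([0.1, 0.2, 1.5, 1.5, 0.1])).round(3))\n\n# OR: one logit per threshold; the model can be confident about the low thresholds,\n# uncertain about one, and confident about the top one, without redistributing mass.\nprint(sigmoid(np.array([4.0, 3.0, 0.0, -4.0])).round(3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "5"

},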
|
{ |
|
"text": "While our results are only empirically demonstrated for OR implemented with CORAL, we expect that our observations would likely generalize to other OR implementations, especially latent-variable models and other variants that produce rank-ordered thresholds, as the salient features would be the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Common model evaluation metrics such as accuracy, precision, recall, and F1 are insufficient for capturing the degree of error between prediction and target for multi-class prediction where the target is ordinal. Therefore, selecting models based on these traditional metrics may result in selecting an underperforming model. Error-sensitive metrics such as MSE (similarly, MAE and RMSE) and Spearman correlation capture the ordinal error, resulting in selecting models that have more representative distributions and improved generalization across data segments, as measured by K-L divergence. This is particularly important in use cases like predictive NPS, where accurate scores are necessary not just in aggregate, but across time and different customer segments as well. The independence assumption made by MC can remove useful ordinal signal, especially in cases where there is greater consistency across reviewers, their language, or their ratings. For benchmarks involving ordinal target variables, it is important to evaluate the MSE (or a similar error-sensitive metric like MAE, a @ n, and M AE M ) and Spearman correlations in addition to the usual metrics in determining whether a new model outperforms previous models. We hope to see these metrics included in future benchmarks with ordinal target variables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Additionally, some of our experimental observations appear to be tied to the Softmax activation used in MC compared to the Sigmoidal activation in OR, such as class dropping in MC and substantially lower author-level K-L divergence in OR. These observations motivate exploration into alternative losses and activations for ordinal problems compared to traditional classification. For example, it would be interesting to compare the MC approach to one that does not involve the Softmax, such as \"One-vs-Rest\" or \"One-vs-One,\" to observe whether class dropping persists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Likert scales and data analyses", |
|
"authors": [ |
|
{ |
|
"first": "Elaine", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Seaman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elaine Allen and Christopher A. Seaman. 2007. Likert scales and data analyses. Quality Progress.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Evaluation measures for ordinal regression", |
|
"authors": [ |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Baccianella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Esuli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Ninth International Conference on Intelligent Systems Design and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "283--287", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2009. Evaluation measures for ordi- nal regression. In Proceedings of the 2009 Ninth International Conference on Intelligent Systems Design and Applications, pages 283-287, USA. IEEE Computer Society.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Predicting factuality of reporting and bias of news media sources", |
|
"authors": [ |
|
{ |
|
"first": "Ramy", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgi", |
|
"middle": [], |
|
"last": "Karadzhov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitar", |
|
"middle": [], |
|
"last": "Alexandrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.01765" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramy Baly, Georgi Karadzhov, Dimitar Alexan- drov, James Glass, and Preslav Nakov. 2018. Pre- dicting factuality of reporting and bias of news media sources. arXiv preprint arXiv:1810.01765.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Tweeteval: Unified benchmark and comparative evaluation for tweet classification", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Barbieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonardo", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa-Anke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1644--1650", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Leonardo Neves, and Luis Espinosa-Anke. 2020. Tweeteval: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644-1650. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A hierarchical classifier applied to multi-way sentiment detection", |
|
"authors": [ |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Bickerstaffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingrid", |
|
"middle": [], |
|
"last": "Zukerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "62--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adrian Bickerstaffe and Ingrid Zukerman. 2010. A hierarchical classifier applied to multi-way sen- timent detection. In Proceedings of the 23rd International Computational Linguistics (Coling 2010), pages 62-70. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A systematic study of the class imbalance problem in convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Mateusz", |
|
"middle": [], |
|
"last": "Buda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsuto", |
|
"middle": [], |
|
"last": "Maki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maciej", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mazurowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Neural Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mateusz Buda, Atsuto Maki, and Maciej A. Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, pages 249-259.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "On the role of text preprocessing in neural network architectures: An evaluation study on text categorization and sentiment analysis", |
|
"authors": [ |
|
{

"first": "Jose",

"middle": [],

"last": "Camacho-Collados",

"suffix": ""

},
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Taher Pilehvar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jose Camacho-Collados and Mohammad Taher Pile- hvar. 2018. On the role of text preprocessing in neural network architectures: An evaluation study on text categorization and sentiment analy- sis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 40-46. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Classification efficiency of multinomial logistic regression relative to ordinal logistic regression", |
|
"authors": [ |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allan", |
|
"middle": [], |
|
"last": "Donner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "84", |
|
"issue": "406", |
|
"pages": "587--591", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M Karen Campbell and Allan Donner. 1989. Clas- sification efficiency of multinomial logistic re- gression relative to ordinal logistic regression. Journal of the American Statistical Association, 84(406):587-591.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Rank consistent ordinal regression for neural networks with application to age estimation", |
|
"authors": [ |
|
{ |
|
"first": "Wenzhi", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vahid", |
|
"middle": [], |
|
"last": "Mirjalili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Raschka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Pattern Recognition Letters", |
|
"volume": "140", |
|
"issue": "", |
|
"pages": "325--331", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenzhi Cao, Vahid Mirjalili, and Sebastian Raschka. 2020. Rank consistent ordinal regres- sion for neural networks with application to age estimation. Pattern Recognition Letters, 140:325- 331.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Smote: Synthetic minority over-sampling technique", |
|
"authors": [ |
|
{

"first": "Nitesh",

"middle": [
"V"
],

"last": "Chawla",

"suffix": ""

},

{

"first": "Kevin",

"middle": [
"W"
],

"last": "Bowyer",

"suffix": ""

},

{

"first": "Lawrence",

"middle": [
"O"
],

"last": "Hall",

"suffix": ""

},

{

"first": "W",

"middle": [
"Philip"
],

"last": "Kegelmeyer",

"suffix": ""

}
|
], |
|
"year": 2002, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "321--357", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. Smote: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321- 357.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chang", |
|
"middle": [], |
|
"last": "Ming-Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lee", |
|
"middle": [], |
|
"last": "Kenton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toutanova", |
|
"middle": [], |
|
"last": "Kristina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "41710--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Chang Ming-Wei, Lee Kenton, and Toutanova Kristina. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 41710-4186. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "A simple approach to ordinal classification", |
|
"authors": [ |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Machine Learning: ECML 2001", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eibe Frank and Mark Hall. 2001. A simple approach to ordinal classification. In De Raedt L., Flach P. (eds) Machine Learning: ECML 2001., volume 2167. ECML 2001. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Setconv: A new approach for learning from imbalanced data", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi-Fan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charu", |
|
"middle": [], |
|
"last": "Aggarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Latifur", |
|
"middle": [], |
|
"last": "Khan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1284--1294", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Gao, Yi-Fan Li, Yu Lin, Charu Aggarwal, and Latifur Khan. 2020. Setconv: A new approach for learning from imbalanced data. In Proceed- ings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 1284-1294. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Predicting and analyzing language specificity in social media posts. Association for the Advancement of Artificial Intelligence", |
|
"authors": [ |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Preotiuc-Pietro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junyi Jessy", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yifan Gao, Yang Zhong, Daniel Preotiuc-Pietro, and Junyi Jessy Li. 2019. Predicting and ana- lyzing language specificity in social media posts. Association for the Advancement of Artificial In- telligence.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "chapter Large margin rank boundaries for ordinal regression", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Graepel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Obermayer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Large Margin Classifiers", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T Graepel and K Obermayer. 1999. Advances in Large Margin Classifiers, volume 7, chapter Large margin rank boundaries for ordinal regres- sion. The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Ordinal regression methods: survey and experimental study", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Antonio Guti\u00e9rrez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mar\u00eda", |
|
"middle": [], |
|
"last": "P\u00e9rez-Ortiz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "S\u00e1nchez-Monedero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez-Navarro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C\u00e9sar", |
|
"middle": [], |
|
"last": "Herv\u00e1s-Mart\u00ednez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "127--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Antonio Guti\u00e9rrez, Mar\u00eda P\u00e9rez-Ortiz, Javier S\u00e1nchez-Monedero, Francisco Fern\u00e1ndez- Navarro, and C\u00e9sar Herv\u00e1s-Mart\u00ednez. 2016. Ordi- nal regression methods: survey and experimental study. IEEE Transactions on Knowledge and Data Engineering, 28(1):127-146.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A challenge dataset and effective models for aspect-based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Qingnan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruifeng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Ao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6280--6285", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment anal- ysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6280-6285. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Likert scale: Explored and explained", |
|
"authors": [ |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saket", |
|
"middle": [], |
|
"last": "Kale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satish", |
|
"middle": [], |
|
"last": "Chandel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D Kumar", |
|
"middle": [], |
|
"last": "Pal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "British Journal of Applied Science & Technology", |
|
"volume": "7", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankur Joshi, Saket Kale, Satish Chandel, and D Ku- mar Pal. 2015. Likert scale: Explored and ex- plained. British Journal of Applied Science & Technology, 7(4):396.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "The multilingual amazon reviews corpus", |
|
"authors": [ |
|
{ |
|
"first": "Phillip", |
|
"middle": [], |
|
"last": "Keung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4563--4568", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phillip Keung, Yichao Lu, Gy\u00f6rgy Szarvas, and Noah A. Smith. 2020. The multilingual amazon reviews corpus. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4563-4568. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A la carte embedding: Cheap but effective induction of semantic feature vectors", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Khodak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikunj", |
|
"middle": [], |
|
"last": "Saunshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingyu", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tengyu", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brandon", |
|
"middle": [], |
|
"last": "Stewart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, and Sanjeev Arora. 2018. A la carte embedding: Cheap but effective induction of semantic feature vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 12-22. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Treating ordinal scales as interval scales: an attempt to resolve the controversy", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Knapp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Nursing research", |
|
"volume": "39", |
|
"issue": "2", |
|
"pages": "121--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas R Knapp. 1990. Treating ordinal scales as interval scales: an attempt to resolve the contro- versy. Nursing research, 39(2):121-123.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The seven deadly sins of statistical analysis", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Kuzon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Urbanchek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Mccabe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Annals of plastic surgery", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "265--272", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Kuzon, Melanie Urbanchek, and Steven McCabe. 1996. The seven deadly sins of statis- tical analysis. Annals of plastic surgery, 37:265- 272.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 31 st International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31 st International Conference on Machine Learning, 32. JMLR.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Cultural differences in responses to a likert scale", |
|
"authors": [ |
|
{ |
|
"first": "Jerry", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patricia", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshimitsu", |
|
"middle": [], |
|
"last": "Mineyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinwei", |
|
"middle": [ |
|
"Esther" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Research in Nursing and Health", |
|
"volume": "25", |
|
"issue": "4", |
|
"pages": "295--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jerry W. Lee, Patricia S. Jones, Yoshimitsu Mineyama, and Xinwei Esther Zhang. 2002. Cul- tural differences in responses to a likert scale. Research in Nursing and Health, 25(4):295-306.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Beyond user self-reported likert scale ratings: a comparison model for automatic dialog evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Weixin", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1363--1374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weixin Liang, James Zou, and Zhou Yu. 2020. Be- yond user self-reported likert scale ratings: a comparison model for automatic dialog evalua- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1363-1374. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Lin- guistics: Human Language Technologies, pages 142-150. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "An empirical examination of the likert scale: Some assumptions, development and cautions", |
|
"authors": [ |
|
{ |
|
"first": "Chester", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "McCall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "annual meeting of the CERA Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chester H McCall. 2001. An empirical examination of the likert scale: Some assumptions, develop- ment and cautions. In annual meeting of the CERA Conference, South Lake Tahoe, CA.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Ex- ploiting class relationships for sentiment catego- rization with respect to rating scales. In Proceed- ings of the 43rd Annual Meeting of the Associ- ation for Computational Linguistics (ACL'05), pages 115-124. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The one number you need to grow", |
|
"authors": [ |
|
{ |
|
"first": "Frederick", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Reichheld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Harvard Business Review", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frederick F. Reichheld. 2003. The one number you need to grow. Harvard Business Review.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Sentibench -a benchmark comparison of state-of-the-practice sentiment analysis methods", |
|
"authors": [ |
|
{ |
|
"first": "Filipe", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matheus", |
|
"middle": [], |
|
"last": "Ara\u00fajo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pollyanna", |
|
"middle": [], |
|
"last": "Gon\u00e7alves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos Andr\u00e9", |
|
"middle": [], |
|
"last": "Gon\u00e7alves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabr\u00edcio", |
|
"middle": [], |
|
"last": "Benevenuto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EPJ Data Science", |
|
"volume": "5", |
|
"issue": "23", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Filipe N. Ribeiro, Matheus Ara\u00fajo, Pollyanna Gon\u00e7alves, Marcos Andr\u00e9 Gon\u00e7alves, and Fabr\u00ed- cio Benevenuto. 2016. Sentibench -a bench- mark comparison of state-of-the-practice sen- timent analysis methods. EPJ Data Science, 5(23).", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Ordinal convolutional neural networks for predicting rdoc positive valence psychiatric symptom severity scores", |
|
"authors": [ |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Rios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Kavuluru", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of biomedical informatics", |
|
"volume": "75", |
|
"issue": "", |
|
"pages": "85--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anthony Rios and Ramakanth Kavuluru. 2017. Or- dinal convolutional neural networks for predict- ing rdoc positive valence psychiatric symptom severity scores. Journal of biomedical informat- ics, 75:S85-S93.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Semeval-2017 task 4: Sentiment analysis in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noura", |
|
"middle": [], |
|
"last": "Farra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th international workshop on semantic evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "502--518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analy- sis in twitter. In Proceedings of the 11th in- ternational workshop on semantic evaluation (SemEval-2017), pages 502-518.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Nlp-progress: Sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder. Nlp-progress: Sentiment analysis [online]. 2021.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Twitter sentiment analysis based on ordinal regression", |
|
"authors": [ |
|
{ |
|
"first": "Shihab", |
|
"middle": [ |
|
"Elbagir" |
|
], |
|
"last": "Saad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE Access", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "163677--163685", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shihab Elbagir Saad and Jing Yang. 2019. Twitter sentiment analysis based on ordinal regression. IEEE Access, 7:163677-163685.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "From argumentation mining to stance classification", |
|
"authors": [ |
|
{ |
|
"first": "Parinaz", |
|
"middle": [], |
|
"last": "Sobhani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Inkpen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stan", |
|
"middle": [], |
|
"last": "Matwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From argumentation mining to stance clas- sification. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 67-77.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP 2013)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Conference on Em- pirical Methods in Natural Language Processing (EMNLP 2013), pages 1631-1642. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Ateret Anaby Tavor, and Boaz Carmeli. 2020. Balancing via generation for multi-class text classification improvement", |
|
"authors": [ |
|
{ |
|
"first": "Naama", |
|
"middle": [], |
|
"last": "Tepper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Esther", |
|
"middle": [], |
|
"last": "Goldbraich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naama", |
|
"middle": [], |
|
"last": "Zwerdling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ateret", |
|
"middle": [], |
|
"last": "Anaby Tavor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boaz", |
|
"middle": [], |
|
"last": "Carmeli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1440--1452", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naama Tepper, Esther Goldbraich, Naama Zw- erdling, George Kour, Ateret Anaby Tavor, and Boaz Carmeli. 2020. Balancing via generation for multi-class text classification improvement. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1440-1452. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Ordinal commonsense inference", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Rudinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "379--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common- sense inference. Transactions of the Association for Computational Linguistics, 5:379-395.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Across datasets, OR outperforms MC on MSE (purple; lower is better) and Spearman Rho (blue). The results are mixed for Accuracy (gray) and Weighted-F1 (red). v 2 = 1 and v 1 = 1 are also true, and we encode a rating of 4 out of 5 as v v v =[1110]. The model prediction, p, is then the first index at which", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "OR (left) outperforms MC (right) with respect to representing whole distributions of movie ratings for MR (top) and IMDb (bottom) datasets as measured by divergence.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "OR outperforms MC across authors in the MR dataset as measured by smoothed K-L divergence (lower is better). This is particularly evident in author c (10x) and author a (2x).", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Experimental results for the benchmark datasets comparing OR to MC. Bolded numbers represent higher values except MSE and K-L divergence, where lower is better.", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |