{
"paper_id": "R13-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:56:07.030266Z"
},
"title": "Weighted maximum likelihood as a convenient shortcut to optimize the F-measure of maximum entropy classifiers",
"authors": [
{
"first": "Georgi",
"middle": [],
"last": "Dimitroff",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ontotext AD",
"location": {
"settlement": "Sofia",
"country": "Bulgaria"
}
},
"email": "[email protected]"
},
{
"first": "Laura",
"middle": [],
"last": "Tolo\u015fi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ontotext AD",
"location": {
"settlement": "Sofia",
"country": "Bulgaria"
}
},
"email": "[email protected]"
},
{
"first": "Borislav",
"middle": [],
"last": "Popov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ontotext AD",
"location": {
"settlement": "Sofia",
"country": "Bulgaria"
}
},
"email": "[email protected]"
},
{
"first": "Georgi",
"middle": [],
"last": "Georgiev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ontotext AD",
"location": {
"settlement": "Sofia",
"country": "Bulgaria"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We link the weighted maximum entropy and the optimization of the expected F \u03b2measure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F \u03b2-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F \u03b2-measure.",
"pdf_parse": {
"paper_id": "R13-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We link the weighted maximum entropy and the optimization of the expected F \u03b2measure, by viewing them in the framework of a general common multi-criteria optimization problem. As a result, each solution of the expected F \u03b2-measure maximization can be realized as a weighted maximum likelihood solution-a well understood and behaved problem. The specific structure of maximum entropy models allows us to approximate this characterization via the much simpler class-wise weighted maximum likelihood. Our approach reveals any probabilistic learning scheme as a specific trade-off between different objectives and provides the framework to link it to the expected F \u03b2-measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In many NLP classification applications, the classes are not symmetric and the user has some preference towards a high Precision or Recall of a particular target class. Thus, appropriate tuning of the model is often necessary, depending on the particular tolerance of the application to false positive or false negative results. This preference can be expressed by requiring a large F \u03b2 measure for a particular \u03b2 describing the desired Precision/Recall trade-off. Ideally, the parameters of the linear model should be estimated such that a desired F \u03b2 measure is maximized. However, directly maximizing F \u03b2 is hard, due to its nonconcave shape.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Maximum likelihood-based classifiers such as the maximum entropy are relatively easy to fit, but they are rigid and cannot be tuned to a desired Precision and Recall trade-off. In this article, we consider a more flexible maximum entropy model, which optimizes a weighted likelihood function. If appropriate weights are chosen, then the maximum weighted likelihood model coincides with the optimal F \u03b2 model. The advantage of the weighted likelihood as a loss function is that it is concave and standard gradient methods can be used for its optimization. In fact an existing maximum entropy implementation can be easily generalized to the weighted case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, such a link between the maximum likelihood and the F \u03b2 has not been established before. The article is focused on the intuition of the relation and the sketch of the proof of the main result. We also present numerical experiment supporting the theoretical findings. Additional value of our theoretical observation is that it establishes the methodology of viewing a particular probabilistic model as a specific solution of a common multi-criteria optimization problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This article is organized as follows. In Section 2 we present related work, Sections 3 to 6 present the theoretical aspects of link between the weighted maxent and F measure. Section 7 introduces the algorithm, Section 8 explains the steps for evaluation of the algorithm, Section 9 presents the datasets. Sections 10 and 11 present aspects of performance of our method on the datasets and Section 12 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most popular heuristic for Precision-Recall trade-off is based on adjusting the acceptance threshold given by maximum entropy models (or any learning framework). However, this procedure amounts to a simple translation of the maximum likelihood hyperplane towards or away from the target class and does not fit the model anew.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The expected F measureF is also considered in (Nan et al., 2012) , where also its consistency is studied and even a Hoeffding bound for the convergence is given. However, the authors there mainly concentrate on the acceptance threshold to optimize the F -measure. (Dembczyn'ski et al., 2011 ) gave a general algorithm for F measure optimization for a particular parametrization involving m 2 + 1 parameters where m is the number of examples in the binary classification case. Determining the parameters of the models however can be very hard. A very interesting result in (Dembczyn'ski et al., 2011) is that in the worst case there is a lower bound on the discrepancy between the optimal solution and the solution obtained by means of optimal acceptance threshold, which further motivates our approach. In our approach we directly find the parameters of the model that maximize the expected F measure using the link to the weighted maximum likelihood. (Jansche, 2005) describe a maximum entropy model that optimizes directly an expected F \u03b2based loss. However the expected F \u03b2 is not concave and is rather cumbersome to deal with. Therefore the standard gradient methods do not guarantee optimality of the solution. (Minkov et al., 2006) introduce another heuristics, which is based on changing the weight of a special feature, which indicates if a sample is in the complementary class or not.",
"cite_spans": [
{
"start": 46,
"end": 64,
"text": "(Nan et al., 2012)",
"ref_id": null
},
{
"start": 264,
"end": 290,
"text": "(Dembczyn'ski et al., 2011",
"ref_id": "BIBREF2"
},
{
"start": 572,
"end": 599,
"text": "(Dembczyn'ski et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 952,
"end": 967,
"text": "(Jansche, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 1216,
"end": 1237,
"text": "(Minkov et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The weighted logistic regression is well known, see for example (Vandev and Neykov, 1998) , and the corresponding estimation is barely harder than in the standard case without weights. See also (Simeckov\u00e1, 2005) for an interesting discussion.",
"cite_spans": [
{
"start": 64,
"end": 89,
"text": "(Vandev and Neykov, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 194,
"end": 211,
"text": "(Simeckov\u00e1, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The maximum entropy modeling framework as introduced in the NLP domain by (Berger et al., 1996) has become the standard for various NLP tasks. To fix notations consider a training set of m",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "samples {(x(i), y(i)) : i \u2208 1, . . . m} where x(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "is a sample with class y(i), where y(i) takes values in some finite set Y. In this paper we aim at explaining the main idea of the link between the weighted maximum entropy and the expected F \u03b2 ; to keep things technically simple we restrict to the case |Y| = 2. Each observation is represented by a set of features {f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "j (x(i), y(i)) : j \u2208 1, . . . , N }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "The maximum entropy principle forces the model conditional probabilities p(y|x, \u03bb) to have the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "p(y|x, \u03bb) = 1 Z \u03bb (x) exp j \u03bb j \u2022 f j (x, y),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "where \u03bb \u2208 R N are the model parameters and Z \u03bb (x) is a normalization constant. The calibration of the model amounts to (see (Berger et al., 1996) ) maximizing the log-likelihood",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "l(\u03bb : x, y) = m i=1 log p(y(i)|x(i), \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "In the following for a weight vector w \u2208 R m we will make use of the weighted log-likelihood function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "l W (\u03bb : w, x, y) = m i=1 w(i) log p(y(i)|x(i), \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
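{
"text": "As an illustration (not part of the original paper), the following minimal Python sketch shows how the binary maximum entropy conditional probabilities p(y|x, \u03bb) and the weighted log-likelihood l_W could be computed; the dense feature layout F[i, y, j] = f_j(x(i), y) and all variable names are assumptions made for the example.\n\nimport numpy as np\n\ndef maxent_probs(lmbda, F):\n    # p(y|x, lambda) for a binary maxent model; F has shape (m, 2, N)\n    scores = F @ lmbda                              # sum_j lambda_j * f_j(x, y), shape (m, 2)\n    scores -= scores.max(axis=1, keepdims=True)     # numerical stability\n    exps = np.exp(scores)\n    return exps / exps.sum(axis=1, keepdims=True)   # division by Z_lambda(x)\n\ndef weighted_log_likelihood(lmbda, F, y, w):\n    # l_W(lambda : w, x, y) = sum_i w(i) log p(y(i)|x(i), lambda)\n    p = maxent_probs(lmbda, F)\n    idx = np.arange(len(y))\n    return float(np.sum(w * np.log(p[idx, y])))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},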
{
"text": "In our case the weights will be defined mostly class-wise, i.e. examples from the same class will always have the same weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "4 Precision/Recall trade off. Expected F \u03b2 -measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "The performance of a classifier is typically measured using the Precision and Recall metrics, and in particular their tradeoff described by a constant \u03b2 \u2208 [0, 1] and expressed as the \u03b2-weighted harmonic mean called F \u03b2 -measure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "F \u03b2 := \u03b2 P + 1 \u2212 \u03b2 R \u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "The larger the \u03b2 the greater the influence of the Precision as compared to the Recall on the F \u03b2measure. The Precision and Recall are defined in terms of the true/false positive/negative counts. For a given example with attributes x the maximum entropy model will produce the conditional probabilities p(y|x, \u03bb) of the example being into one of the classes y \u2208 Y. When used for classification however, one would typically choose the class y(x) having the largest probability i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "y(x) = argmax y p(y|x, \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "This means that we would completely disregard the additional information incorporated into the model. A more probabilistic approach would be to draw the class y(x) randomly out of the model distribution given by the probability weights {p(y|x, \u03bb) : y \u2208 Y}. This way the classes y(x) as well as the true/false positive/negative counts would be random variables. However if we perform this sampling many times and take the average we will end up having the expected true/false positive/negative counts. For example the expected true positive and true negative counts are given b\u1ef9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "A u = E#true pos = i:y(i)=1 p(1|x(i), \u03bb); D u = E#true neg = i:y(i)=0 p(0|x(i), \u03bb) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "Using the expected counts instead of the realized ones we can define the mean field approximatio\u00f1 P andR of the precision and recall metrics and consequently define the mean field approximatio\u00f1 F \u03b2 of the standard F \u03b2 measur\u1ebd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "F \u03b2 := \u03b2 P + 1 \u2212 \u03b2 R \u22121 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
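{
"text": "A hedged Python sketch (ours, not the authors') of the mean field quantities defined above: the expected true positive and true negative counts of equation (1) and the resulting expected F \u03b2 ; the array names are illustrative assumptions.\n\nimport numpy as np\n\ndef expected_f_beta(p_pos, y, beta):\n    # p_pos[i] = p(1 | x(i), lambda); y[i] in {0, 1} is the observed class\n    A = np.sum(p_pos[y == 1])             # expected true positives, eq. (1)\n    D = np.sum(1.0 - p_pos[y == 0])       # expected true negatives, eq. (1)\n    m0 = np.sum(y == 0)\n    m1 = np.sum(y == 1)\n    P = A / (A + (m0 - D))                # expected Precision\n    R = A / m1                            # expected Recall\n    return 1.0 / (beta / P + (1.0 - beta) / R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},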
{
"text": "As in (Jansche, 2005 ) with a slight abuse of notation we will callF \u03b2 the expected F \u03b2 measure. For a large training set and a good model the expected F \u03b2 measure on the training set will be close to the standard one since the model probabilities p(y(i)|x(i), \u03bb) will be close to one for the training examples.",
"cite_spans": [
{
"start": 6,
"end": 20,
"text": "(Jansche, 2005",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "5 Weighted maximum likelihood vs. expected F \u03b2 -measure maximization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "Clearly the log-likelihood and the expected F \u03b2 measure are two different, however one would hope, not orthogonal objectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "Intuitively every reasonable machine learning model would try to set the model parameters \u03bb in such a manner that for all training examples the model conditional probabilities of the observed classes y(i) given the example's attributes x(i), namely p(y(i)|x(i), \u03bb), are as large as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "In general if the used model is not overfitting it would not be possible for all conditional probabilities to be close to one simultaneously, and implicitly every particular model would handle the tradeoffs in its own manner. In this sense the important difference between the log-likelihood and the expected F \u03b2 measure seen as objective functions is that, while the log-likelihood approach gives equal importance to all training examples on the logarithmic scale the (expected) F \u03b2 measure has a parameter controlling this trade-off on a class-wise level. On the other hand, as noted in (Jansche, 2005) the flexibility inF \u03b2 comes at a price -th\u1ebd F \u03b2 is by far not that nice function to optimize as the log-likelihood is. The next proposition gives a useful link between theF \u03b2 and the weighted log-likelihood enabling us to findF \u03b2 optimizers by solving the very well behaved and understood weighted maximum likelihood problem.",
"cite_spans": [
{
"start": 589,
"end": 604,
"text": "(Jansche, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "Proposition 1. Let\u03bb \u03b2 be the maximizer of the expected F \u03b2 measureF \u03b2 . Then there exists a vector of weights w(\u03b2) \u2208 R m such that\u03bb \u03b2 coincides with the weighted maximum likelihood estimator",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "\u03bb w(\u03b2) M L = arg max l W (\u03bb : w(\u03b2), x, y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "Moreover, we can approximate the \u03b2-implied weights w(\u03b2) with a class-wise weight vector w(\u03b2) (i.e., the weights of training examples from the same class have the same weights) , that i\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "\u03bb \u03b2 =\u03bb w(\u03b2) M L and\u03bb \u03b2 \u2248\u03bbw (\u03b2) M L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
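{
"text": "For concreteness, a sketch (an assumption-laden illustration, not the authors' implementation) of computing the class-wise weighted maximum likelihood estimate from Proposition 1 with a generic gradient-based optimizer; the feature layout F[i, y, j] = f_j(x(i), y) and the helper names are ours.\n\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef fit_classwise_weighted_maxent(F, y, class_weights):\n    # class_weights maps each class label to its weight w-hat(beta)\n    m, _, n = F.shape\n    w = np.array([class_weights[int(yi)] for yi in y])\n    idx = np.arange(m)\n\n    def neg_weighted_ll(lmbda):\n        scores = F @ lmbda\n        scores -= scores.max(axis=1, keepdims=True)\n        logp = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))\n        return -np.sum(w * logp[idx, y])\n\n    res = minimize(neg_weighted_ll, np.zeros(n), method='L-BFGS-B')\n    return res.x   # lambda-hat for the given class-wise weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},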
{
"text": "Below we give the intuition of the proof and some formal arguments, without presenting all technical details, due to lack of space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Maximum Entropy Model",
"sec_num": "3"
},
{
"text": "The proof makes use of multicriteria optimization techniques (Ehrgott, 2005) , which are typically applied when two or more conflicting objectives need to be optimized simultaneously. In our case, the number of true positives and the number of true negatives need to be maximized at the same time, but most classifiers (at least those that do not overfit badly) trade-off between them. The solutions of multicriteria optimization problem are called Pareto optimal solutions. A solution is Pareto optimal if none of the objectives can be improved without deteriorating at least one of the other objectives.",
"cite_spans": [
{
"start": 61,
"end": 76,
"text": "(Ehrgott, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "Intuitively, the maximum likelihood optimizes simultaneously the conditional probabilities p(y(i)|x(i), \u03bb) via implicitly setting some tradeoffs between them. Therefore our idea is to adjust these trade-offs using the weights in such a manner that theF \u03b2 is optimized rather than the likelihood. The most natural and general way to look at these trade-offs is to consider the multicriteria optimization problem (MOP) max{log p(y(1)|x(1), \u03bb), ..., log p(y(m)|x(m), \u03bb)}. It turns out that both the max likelihood and th\u1ebd F \u03b2 optimizer are particular solutions of the MOP. On the other hand all solutions of the MOP can be obtained by maximizing nonnegative linear combinations of the objectives (Ehrgott, 2005) . However a nonnegative combination of the objectives {log p(y(1)|x(1), \u03bb), ..., log p(y(m)|x(m), \u03bb} is precisely the weighted maximum entropy objective function.",
"cite_spans": [],
"ref_spans": [
{
"start": 693,
"end": 708,
"text": "(Ehrgott, 2005)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "Technically, for each \u03b2 theF \u03b2 maximizer\u03bb \u03b2 can actually be seen as an element of the Pareto optimal set of the multi-criteria optimization problem max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb {\u00c3(\u03bb),D(\u03bb)},",
"eq_num": "(2)"
}
],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "where\u00c3(\u03bb) andD(\u03bb) are the model expected true positive and true negative counts on the training set. This follows from the fact that we can rewrit\u1ebd F \u03b2 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "F \u03b2 (\u03bb) =\u00c3 (\u03bb) \u03b2(\u00c3(\u03bb) \u2212D(\u03bb)) + (1 \u2212 \u03b2)m 1 + \u03b2m 0 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "where m 1 is the total number of positive examples and m 0 the number of negative ones. Furthermore the Pareto optimal set of (2) is a subset of the Pareto optimal set of the finer granularity multi-criteria optimization problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "max \u03bb {p(y(1)|x(1), \u03bb), ..., p(y(m)|x(m), \u03bb)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "Clearly, because of the strict monotonicity of the logarithm the above optimization problem is equivalent to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "max \u03bb {log p(y(1)|x(1), \u03bb), ..., log p(y(m)|x(m), \u03bb)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "(3) On the other hand each element of the Pareto optimal set of (3) can be realized as a weighted maximum likelihood estimator associated to some weight vector w \u2208 R m , which concludes the proof. The pass to approximate class-wise weights is achieved using a linearization of the logconditional probabilities of the training examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sketch of proof:",
"sec_num": null
},
{
"text": "Apart from the obvious technical generalization of the likelihood function the weights could on aver-age be interpreted as a modification of the training set by adding new examples with intensity w(i) while keeping the attributes and the classes (x(i), y(i)). In particular for w(i) < 1 the ith example is deleted with probability 1 \u2212 w(i). If w(i) > 1, say w(i) = q + w f (i) for some integer q \u2265 1 and 0 \u2264 w f (i) < 1 then generate q identical training examples (x(i), y(i)) and additionally clone it with probability w f (i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretation of the weights",
"sec_num": "6"
},
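{
"text": "The resampling interpretation above can be made concrete with a small sketch (our illustration; the function and variable names are assumptions): each example is kept, dropped, or replicated according to its weight w(i).\n\nimport random\n\ndef resample_with_weights(examples, w, rng=None):\n    # examples is a list of (x(i), y(i)) pairs, w the per-example weights\n    rng = rng or random.Random(0)\n    out = []\n    for (x, y), wi in zip(examples, w):\n        q = int(wi)        # integer part: q identical copies (q = 0 drops the example)\n        frac = wi - q      # fractional part: one extra clone with probability frac\n        out.extend([(x, y)] * q)\n        if rng.random() < frac:\n            out.append((x, y))\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretation of the weights",
"sec_num": "6"
},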
{
"text": "This view highlights yet another interpretation of the weights: an asymmetric regularization. Removing some examples when the weight is smaller than 1 is a well known regularization technique called drop-out. When it is applied to features involving only a subset of the classes then obviously it is an asymmetric regularization. The case of weights larger than 1 can be viewed in the same light by simple renormalization. If we have an exogenous L 2 regularization, adding class-wise weights would alter the influence of the regularization on the parameters corresponding to different classes, yet again we achieve an asymmetric regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretation of the weights",
"sec_num": "6"
},
{
"text": "We search for a value w in a predefined interval [w min , w max ] which gives maximum F \u03b2 (w). Our experiments on artificial and real data suggest that the expected F \u03b2 (w) is unimodal on intervals like [\u03b5, w max ], for a small \u03b5 close to zero. This suggests that a golden section search algorithm (Kiefer, 1953) can find the maximum efficiently, i.e. with a minimum number of trained weighted likelihood models.",
"cite_spans": [
{
"start": 298,
"end": 312,
"text": "(Kiefer, 1953)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
{
"text": "In practice however the estimate of F \u03b2 (w) may not be unimodal, because numerical methods are used for training weighted maximum entropy models and the optimal model is only approximately identified. It is safe to assume however that deviation from unimodality is not considerable, for example, we can accept that the function F \u03b2 (w) is \u03b4 -unimodal (as defined in (Brent, 1973) ) for some \u03b4. Then, (Brent, 1973) show that the golden section search approximates the location of the maximum with a tolerance of 5.236\u03b4.",
"cite_spans": [
{
"start": 366,
"end": 379,
"text": "(Brent, 1973)",
"ref_id": "BIBREF1"
},
{
"start": 400,
"end": 413,
"text": "(Brent, 1973)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
{
"text": "Below we describe the steps of the algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
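{
"text": "A sketch of the search procedure (ours; train_and_eval_f_beta is a hypothetical helper that fits a weighted maxent model with target-class weight w and returns its expected F \u03b2 on the training set). It is the standard golden section search for the maximum of a unimodal function on [w_min, w_max].\n\nimport math\n\nPHI = (1 + math.sqrt(5)) / 2   # golden ratio\n\ndef golden_section_search(f, a, b, tol=1e-2):\n    # locate an approximate maximizer of the unimodal function f on [a, b]\n    p1 = a + (2 - PHI) * (b - a)\n    p2 = a + (PHI - 1) * (b - a)\n    f1, f2 = f(p1), f(p2)\n    while b - a > tol:\n        if f1 > f2:                 # the maximum lies in [a, p2]\n            b, p2, f2 = p2, p1, f1\n            p1 = a + (2 - PHI) * (b - a)\n            f1 = f(p1)\n        else:                       # the maximum lies in [p1, b]\n            a, p1, f1 = p1, p2, f2\n            p2 = a + (PHI - 1) * (b - a)\n            f2 = f(p2)\n    return (a + b) / 2\n\n# usage sketch: w_opt = golden_section_search(lambda w: train_and_eval_f_beta(w, beta), 0.1, 5.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},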
{
"text": "8 Evaluation of the algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
{
"text": "In order to demonstrate that our algorithm is an efficient tool for optimizing the F \u03b2 measure, we performed the following tests, the results of which 9.2 Twitter sentiment corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
{
"text": "We used the Sanders Twitter Sentiment Corpus (http://www.sananalytics.com/lab/twittersentiment/), from which we filtered 3425 tweets, labeled as either positive, negative or neutral. We classified tweets that expressed a sentiment (either positive or negative), versus neutral tweets. The neutral tweets are about twice more than the positive and negative tweets together. For the experiments, we used 3081(90%) tweets for training and 343 (10%) for testing. We processed the tweets and obtained about 6095 features. In order to avoid overfitting and speed up computations, we used a filter method based on Information Gain to remove uninformative features. We kept 60 (10%) of the features for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The algorithm",
"sec_num": "7"
},
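{
"text": "A hedged sketch of the kind of Information Gain filter mentioned above (our illustration; the exact preprocessing and feature representation used for the tweets are not specified beyond this, so binary bag-of-words features are assumed).\n\nimport numpy as np\n\ndef binary_entropy(y):\n    if len(y) == 0:\n        return 0.0\n    p = float(np.mean(y))\n    if p == 0.0 or p == 1.0:\n        return 0.0\n    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))\n\ndef information_gain(feature_col, y):\n    # IG of one binary feature with respect to the binary labels y\n    on = feature_col > 0\n    frac = float(np.mean(on))\n    return binary_entropy(y) - frac * binary_entropy(y[on]) - (1 - frac) * binary_entropy(y[~on])\n\ndef top_features_by_ig(X, y, k):\n    # keep the indices of the k most informative features\n    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])\n    return np.argsort(gains)[::-1][:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Twitter sentiment corpus",
"sec_num": "9.2"
},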
{
"text": "By varying the weight of the target class, the weighted maximum entropy achieves Precision-Recall trade-off. Figure 2 clearly illustrates the trade-off, for the synthetic data A and the twitter sentiment data. Additionally, note that Precision and Recall are in equilibrium for a a weight that reflects the ratio of the class cardinalities, namely w = 1 for the balanced synthetic dataset A and w = 2, for the twitter corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "The brute force method reveals the shape of the F \u03b2 (w), as a function of \u03b2 and w (see Figure 4 a ) and c)). Both of our datasets suggest that there is a critical value of w which marks a switch point in the monotony of the F \u03b2 (w) (regarded as a function of \u03b2). For w smaller than the critical switch, F \u03b2 (w) increases with \u03b2, and for w larger than the switch, F \u03b2 (w) decreases with \u03b2. This switch is probably directly related to the ratio of the class cardinalities and deserves further theoretical investigation.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 97,
"text": "Figure 4 a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "Figures 4 a) and c) show also the 'path' that marks the maximum F \u03b2 achievable for each \u03b2, in solid black line. The path corresponding to our golden search algorithm falls fairly close to that of the brute force, as shown by the dotted lines (marking the mean and one standard deviation to each side). Even if sometimes the optimal w is not found exactly by the golden search, the F \u03b2 is still very close to the optimum, as shown in Figures 4 b) and d). In fact, the optimum F \u03b2 is always within one standard deviation from the expected value of our golden search algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "Finally, we demonstrate that our method per- 11 Limits and merits of the weighted maximum entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "In this section we compare the weighted maximum entropy and the acceptance threshold method with the help of the two artificial data sets A and B shown on Figure 1 . The acceptance threshold corresponds to a translation of the separating hyperplane obtained by the standard maximum entropy model. We show that acceptance threshold fails to fit the data well for most values of \u03b2, if the data resemble more dataset A than dataset B. In contrast, the weighted maxent is more adaptive, fitting nicely both datasets for all values of \u03b2. It is rather clear that with translation we can achieve an optimal Precision/Recall trade-off for the synthetic data set B. Indeed, Figure 5 b) shows that the acceptance threshold and the weighted maximum entropy do result in virtually the same optimal F \u03b2 values.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": null
},
{
"start": 665,
"end": 676,
"text": "Figure 5 b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
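{
"text": "For reference, a sketch (ours) of how a dataset like the synthetic dataset B can be generated from the Gaussian parameters reported in the Synthetic datasets section; reading the covariance notation (0.3, 0.3) I_2 as a diagonal covariance matrix is an assumption.\n\nimport numpy as np\n\ndef make_dataset_b(n_per_class=300, seed=0):\n    # two Gaussian classes in two dimensions, as described for dataset B\n    rng = np.random.default_rng(seed)\n    cov = np.diag([0.3, 0.3])\n    x0 = rng.multivariate_normal([0.5, 1.0], cov, n_per_class)   # class 0\n    x1 = rng.multivariate_normal([1.0, 0.8], cov, n_per_class)   # class 1\n    X = np.vstack([x0, x1])\n    y = np.array([0] * n_per_class + [1] * n_per_class)\n    return X, y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limits and merits of the weighted maximum entropy",
"sec_num": "11"
},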
{
"text": "The optimal Precision/Recall trade-off for dataset A however requires additional rotation/tilting of the separating hyperplane that cannot be produced by adjusting the acceptance threshold. In line with this intuition Figure 5 a) demonstrates that the weighted likelihood settles at a better Precision-Recall pairs and consequently results in larger F \u03b2 values.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 229,
"text": "Figure 5 a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "Clearly, in the general case the optimal shift of the separating plane is expected to have a rotation component that is unaccessible by simply adjusting the acceptance threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and results",
"sec_num": "10"
},
{
"text": "The main result of the paper is that the weighted maximum likelihood and the expected F \u03b2 measure are simply two different ways to specify a particular trade-off between the objectives of the same multi-criteria optimization problem. Technically we unify these two approaches by viewing them as methods to pick a particular point from the Pareto optimal set associated with a common multi-criteria optimization problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
},
{
"text": "As a consequence each expected F \u03b2 maximizer can be realized as a weighted maximum likelihood estimator and approximated via a class-wise weighted maximum likelihood estimator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
},
{
"text": "The presented results can be generalized to the regularized and multi-class case which is a subject for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
},
{
"text": "Furthermore, the proposed approach to view any probabilistic learning scheme as a specific trade-off between different objectives and thus to link it to the expected F \u03b2 measure is general and can be applied beyond the maximum entropy framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
},
{
"text": "The difficulty in exploiting the statement of Proposition 1 lies in the fact that it is not apriori clear how to choose the weights w(\u03b2) for a given \u03b2. In a larger paper the authors will present algorithms maximizing theF \u03b2 measure exploiting the theoretical results from this paper via adaptively finding the right weights. Even without a pre-cise estimate for the weights the presented results give the qualitative connection between the Precision/Recall trade-off and the weights: if one aims at higher Precision then smaller weights are appropriate and conversely larger Recall is achieved via larger weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
},
{
"text": "We showed with experiments on artificial and real data that using weighted maximum entropy we can achieve a desired Precision -Recall tradeoff. We also presented an efficient algorithm based on golden section search, that approximates well the class weights at which the maximum F \u03b2 is attained. We showed that on the test set, we achieve larger F \u03b2 than the simple maximum entropy baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "12"
}
],
"back_matter": [
{
"text": "Require: Unimodal function f , interval [a, b] Ensure:2: function GSS (f , a, b, p1, p2) 3:if |b \u2212 a| < \u03b5 then 4: return a 5: else 6:if f (p1) > f (p2) then 7:b \u2190 p2 8:p2 \u2190 p1 9:p1 \u2190 (2 \u2212 \u03c6)(b \u2212 a) 10: else 11:a \u2190 p1 12:p1 \u2190 p2 13:end if 15:return GSS(f, a, b, p1, p2) 16:end if 17: end functionare described in the Results section.First, we evaluated Precision and Recall at different values of the class weight w in the interval [0.1, 5] and show that they are antagonistic, which demonstrates that weighted maxent can trade-off Precision and Recall.Second, we show that our golden section search algorithm finds a good approximation of the optimum class-weight w, necessary for maximizing a specific F \u03b2 (w), despite the violation of the unimodality of F \u03b2 (w). We can identify the optimum weights by means of a brute-force approach, by which we try a large number of values for the weight of the target class (in practice, 50 values evenly distributed in [0.1, 5]). The brute-force is infeasible practical applications, because it requires training a large number of weighted maxent models. The comparison to the brute-force method is carried on the training set, because finding the appropriate class weight w is part of model fitting, together with the estimation of the model weights \u03bb.Third, we demonstrate that the models that we fit are superior (i.e. yield better test F \u03b2 ) than the maxent model. To this end, we compute F \u03b2 for a range of values of \u03b2 \u2208 [0, 1]. We compare these results with the test F \u03b2 that our algorithm delivers. For a reliable comparison, we also estimate the variance of the F \u03b2 values -both for our method and for the baseline -by training on 20 bootstrap samples of the training set instead of the original ",
"cite_spans": [
{
"start": 40,
"end": 46,
"text": "[a, b]",
"ref_id": null
},
{
"start": 70,
"end": 88,
"text": "(f , a, b, p1, p2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Golden Section Search",
"sec_num": null
},
{
"text": "We simulated two datasets, A and B, of 600 samples each of them with two equally populated classes and only two features. In dataset A the samples from class 0 are distributed asand N (\u00b5 B 1 , \u03a3 B 1 ) with \u00b5 B 0 = (0.5, 1), \u03a3 B 0 = (0.3, 0.3) I 2 , \u00b5 B 1 = (1, 0.8) and \u03a3 B 1 = (0.3, 0.3) I 2 . In Figure 1 we visualize both synthetic datasets. We used 400 of the samples for training and 200 for testing.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic datasets",
"sec_num": "9.1"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Comput. Linguist",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.L. Berger, V.J. Della Pietra, and S.A. Della Pietra. 1996. A maximum entropy approach to natural lan- guage processing. Comput. Linguist., 22(1):39-71.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for Minimization Without Derivatives",
"authors": [
{
"first": "R",
"middle": [
"P"
],
"last": "Brent",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. P. Brent. 1973. Algorithms for Minimization With- out Derivatives. Prentice-Hall, Inc., Englewood Cliffs, New Jersery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An exact algorithm for f-measure maximization",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Dembczyn'ski",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Waegeman",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Eyke",
"middle": [],
"last": "H\u00fcllermeier",
"suffix": ""
}
],
"year": 2011,
"venue": "Neural information processing systems : 2011 conference book. Neural Information Processing Systems Foundation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Dembczyn'ski, Willem Waegeman, Weiwei Cheng, and Eyke H\u00fc llermeier. 2011. An exact algorithm for f-measure maximization. In Neural information processing systems : 2011 conference book. Neural Information Processing Systems Foun- dation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi Criteria Optimization",
"authors": [
{
"first": "Matthias",
"middle": [
"Ehrgott"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Ehrgott. 2005. Multi Criteria Optimization. Springer, Englewood Cliffs, New Jersery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Maximum expected F-measure training of logistic regression models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jansche",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT '05",
"volume": "",
"issue": "",
"pages": "692--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Jansche. 2005. Maximum expected F-measure training of logistic regression models. In HLT '05, pages 692-699, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sequential minimax search for a maximum",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kiefer",
"suffix": ""
}
],
"year": 1953,
"venue": "Proceedings of the American Mathematical Society",
"volume": "4",
"issue": "3",
"pages": "502--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Kiefer. 1953. Sequential minimax search for a max- imum. Proceedings of the American Mathematical Society, 4(3):502-506.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "NER systems that suit user's preferences: adjusting the recall-precision trade-off for entity extraction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Minkov",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tomasic",
"suffix": ""
},
{
"first": "W",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "93--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Minkov, R.C. Wang, A. Tomasic, and W.W. Cohen. 2006. NER systems that suit user's preferences: ad- justing the recall-precision trade-off for entity ex- traction. In Proceedings of NAACL, pages 93-96.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Optimizing f-measure: A tale of two approaches",
"authors": [],
"year": 2012,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Nan, Kian Ming Adam Chai, Wee Sun Lee, and Hai Leong Chieu. 2012. Optimizing f-measure: A tale of two approaches. In ICML.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Maximum weighted likelihood estimator in logistic regression",
"authors": [
{
"first": "M",
"middle": [],
"last": "Simeckov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Simeckov\u00e1. 2005. Maximum weighted likelihood estimator in logistic regression.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "About regression estimators with high breakdown point",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Vandev",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Neykov",
"suffix": ""
}
],
"year": 1998,
"venue": "Statistics",
"volume": "32",
"issue": "",
"pages": "111--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. L. Vandev and N. M. Neykov. 1998. About regres- sion estimators with high breakdown point. Statis- tics, 32:111-129.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Precision-Recall trade-off on the train set by changing class-weights: a) synthetic dataset A; b) sentiment tweeter dataset.forms very well on the test set, compared to the simple maxent baseline.Figure 3 a)and b) show that the test F \u03b2 is superior to the baseline, due to its ability to adapt the fitted model to the specific Precision -Recall trade-off, expressed by a value of \u03b2.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Test F \u03b2 for our method, compared to the maxent baseline. One standard deviation bars are added. a) synthetic data; b) twitter corpus.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Heatmap showing in grayscale the F \u03b2 (w) values obtained by the brute force method. The solid black line shows the optimal models for each beta. The dotted lines show the estimates given by the golden search: a) synthetic data; c) sentiment corpus. Comparison of the train F \u03b2 obtained with the brute force (solid line) and with the golden section search (dotted line, with standard deviation): b) synthetic data; d) sentiment corpus. Comparison of the acceptance threshold versus the weighted maximum likelihood on the stylized synthetic data: a) dataset A ; b) dataset B",
"num": null,
"uris": null
}
}
}
}