|
{ |
|
"paper_id": "E17-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:50:58.389795Z" |
|
}, |
|
"title": "Efficient Benchmarking of NLP APIs using Multi-armed Bandits", |
|
"authors": [ |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University Clayton", |
|
"location": { |
|
"postCode": "3800", |
|
"region": "VICTORIA", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tuan", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University Clayton", |
|
"location": { |
|
"postCode": "3800", |
|
"region": "VICTORIA", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Carman", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University Clayton", |
|
"location": { |
|
"postCode": "3800", |
|
"region": "VICTORIA", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Comparing NLP systems to select the best one for a task of interest, such as named entity recognition, is critical for practitioners and researchers. A rigorous approach involves setting up a hypothesis testing scenario using the performance of the systems on query documents. However, often the hypothesis testing approach needs to send a large number of document queries to the systems, which can be problematic. In this paper, we present an effective alternative based on the multi-armed bandit (MAB). We propose a hierarchical generative model to represent the uncertainty in the performance measures of the competing systems, to be used by Thompson Sampling to solve the resulting MAB. Experimental results on both synthetic and real data show that our approach requires significantly fewer queries compared to the standard benchmarking technique to identify the best system according to Fmeasure.", |
|
"pdf_parse": { |
|
"paper_id": "E17-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Comparing NLP systems to select the best one for a task of interest, such as named entity recognition, is critical for practitioners and researchers. A rigorous approach involves setting up a hypothesis testing scenario using the performance of the systems on query documents. However, often the hypothesis testing approach needs to send a large number of document queries to the systems, which can be problematic. In this paper, we present an effective alternative based on the multi-armed bandit (MAB). We propose a hierarchical generative model to represent the uncertainty in the performance measures of the competing systems, to be used by Thompson Sampling to solve the resulting MAB. Experimental results on both synthetic and real data show that our approach requires significantly fewer queries compared to the standard benchmarking technique to identify the best system according to Fmeasure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "F-measureAs new NLP systems are (continually) introduced for a task of interest, such as named entity recognition (NER), it is crucial for practioneers and researchers to select the best system. These systems may be designed based on different models and/or learning algorithms. For instance, due to recent advancement in NER research, several NER systems have been proposed and then supported in APIs such as OpenNLP (Ingersoll et al., 2013) , Stanford NER (Finkel et al., 2005) , ANNIE (Cunningham et al., 2002) and Meaning Cloud (MeaningCloud-LLC, 1998) to name a few.", |
|
"cite_spans": [ |
|
{ |
|
"start": 418, |
|
"end": 442, |
|
"text": "(Ingersoll et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 479, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 513, |
|
"text": "(Cunningham et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 556, |
|
"text": "(MeaningCloud-LLC, 1998)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Often, the competing NLP systems are benchmarked according to their average performance measure, e.g. F-measure capturing both Precision and Recall, across a set of example documents. Each document produces a single F-measure and the true performance of the system is considered to be the expected value across all possible documents from the domain. Performance on individual documents correspond to samples from the performance distribution of the system, and can then be used to determine the best system (or set of systems should the highest performing system not be unique) using rigorous hypothesis testing. However, this approach usually requires querying each competing system with a large number of documents, which can be problematic if either the number of test documents is limited, or the systems are implemented as APIs by a third party and performing each query incurs a cost.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present a statistically effective method to identify the best system from a pool of systems. Our approach requires significantly fewer example documents to reach similar guarantees as the traditional hypothesis testing set up, hence reducing the cost and increasing the speed of inference. Inspired by the previous work (Scott, 2015; Gabillon et al., 2012; Maron and Moore, 1993) , We formulate the benchmarking problem as a sequential decision process of choosing the best arm as the results of new queried documents are received. More specifically, our formulation is based on the best arm identification in a multi-armed bandit (MAB) decision process. This allows us to adapt Thompson Sampling (Thompson, 1933) and its variants (Russo, 2016) to efficiently solve the resulting MAB problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 351, |
|
"text": "(Scott, 2015;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 374, |
|
"text": "Gabillon et al., 2012;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 397, |
|
"text": "Maron and Moore, 1993)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 731, |
|
"text": "(Thompson, 1933)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 762, |
|
"text": "(Russo, 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A crucial difference between the MAB approach and the traditional hypothesis testing is that it is a sequential testing process, instead of a static testing process which forces the benchmarker to wait for a final answer at the end of an experiment. As such, we need to model the uncertainty regarding the estimated F-measure of each competing system, and continually update it as each new document is queried. We propose a novel hierarchical model for this purpose, which is generally applicable to document-level evaluation tasks based on F-measure. The inference in our model is done using standard sampling techniques, such as Gibbs sampling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We analyse empirically the performance of our approach versus the standard hypothesis testing baselines on synthetic datasets as well as real data for the tasks of sentiment classification and named entity recognition. The empirical results confirm that the number of query documents needed to achieve a particular statistical significance level with our approach is much lower than that required by the hypothesis testing baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our aim is to identify the best system among a finite set of systems based on the noisy sequential measurements of their quality. We formulate this problem as the best arm selection in multiarmed bandit. MAB is a sequential decision process where at each time step n an arm a n from the collection of K slot machines is chosen and played by the gambler. Each arm a \u2208 {1, . . . , K} is associated with an unknown reward distribution f (y|\u03b8 a ) from which the reward is generated when the arm is pulled. In the best arm selection problem, the gambler's goal is to select the arm which has the highest expected reward.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Best Arm Selection in Multi-Armed Bandit", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the common formulation of the MAB, the gambler wants to maximise his cumulative rewards. Intuitively, maximising cumulative rewards eventually leads to the selection of the best arm since it is the optimal decision. However, (Bubeck et al., 2009) gives a theoretical analysis that any strategies for optimising cumulative reward is suboptimal in identifying the best performing arm. To this end, several algorithms have been proposed for the best arm selection e.g. (Maron and Moore, 1993; Gabillon et al., 2012; Russo, 2016) . Although originally developed for maximising cumulative rewards, (Chapelle and Li, 2011; Scott, 2015) provide extensive empirical evidence for the practical success of the Thompson Sampling algorithm for the best arm selection. In what follows, we present Thompson Sampling (TS) and one of its variants, called Pure Exploration TS (PETS), designed specifically for the best arm selection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 249, |
|
"text": "(Bubeck et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 492, |
|
"text": "(Maron and Moore, 1993;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 515, |
|
"text": "Gabillon et al., 2012;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 528, |
|
"text": "Russo, 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 619, |
|
"text": "(Chapelle and Li, 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 632, |
|
"text": "Scott, 2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Best Arm Selection in Multi-Armed Bandit", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let us denote by (a 1 , y 1 ), . . . , (a T , y T ) the sequence of pulled arms and the revealed rewards, a t is the arm pulled at time step t and y t is its associated reward. Note that this sequence is continually growing as the experiment progresses and new arms are pulled. Let f (y|\u03b8 a ) be the probability distribution to model the unknown reward function of the arm a. Had we known the parameters of the reward functions, the best arm could then be selected as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "argmax a E f (y|\u03b8a) [y] = argmax a f (y|\u03b8 a )ydy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Let us denote the collection of all parameters by \u0398 := (\u03b8 1 , . . . , \u03b8 K ). Assuming a prior over the parameters \u03c0 0 (\u0398), we take a Bayesian approach and reason about the posterior of the parameters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u03c0 T (\u0398) = \u03c0 0 (\u0398)L T (\u0398) \u0398 \u03c0 0 (\u0398 )L T (\u0398 )d\u0398", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u0398 is the parameter domain, and L T (\u0398) is the likelihood of the observed data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "{(a t , y t )} T 1 L T (\u0398) := T t=1 f (y t |\u03b8 at ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The posterior probability that a particular arm a is optimal (i.e. has the highest expected reward) is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u03b1 T,a := \u0398a \u03c0 T (\u0398)d\u0398", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where \u0398 a is the set of those parameter values under which the arm a would be selected as the optimal arm:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u0398 a := {\u0398 \u2208 \u0398|E f (y|\u03b8a) = argmax a E f (y|\u03b8 a ) }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In Thompson Sampling the next arm to pull is sampled according to the posterior probability of the arm being optimal. That is, an arm a is selected with probability \u03b1 T,a . Efficient implementation of Thompson Sampling generates a sample from \u03b1 T,a indirectly by first generating a sampl\u00ea \u0398 from \u03c0 n (\u0398) and then selecting the next arm to pull by argmax a E f (y|\u03b8a) [y] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 370, |
|
"text": "[y]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
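The sampling-based arm selection just described can be sketched as follows (a minimal illustration, not the paper's F-measure model: it assumes Bernoulli rewards with independent Beta posteriors, and the function names are ours):

```python
import random

def thompson_pull(successes, failures, rng):
    # Draw one mean from each arm's Beta posterior; pulling the argmax arm
    # is an indirect sample from alpha_{T,a}, the probability of optimality.
    samples = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run_thompson(true_means, steps, seed=0):
    # Simulate Thompson Sampling on arms with the given Bernoulli reward means.
    rng = random.Random(seed)
    k = len(true_means)
    successes, failures = [0] * k, [0] * k
    for _ in range(steps):
        a = thompson_pull(successes, failures, rng)
        if rng.random() < true_means[a]:
            successes[a] += 1
        else:
            failures[a] += 1
    return successes, failures
```

Over enough steps, the pull counts concentrate on the arm with the highest expected reward, which is exactly the over-concentration PETS tempers below.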
|
{ |
|
"text": "Algorithm 1 Pure Exploration TS (PETS) Initialization: Pull each arm once t = K while termination condition is not met d\u00f4 Thompson sampling can perform poorly for the best arm identification problem. The reason is that once it discovers a particular arm is performing well, it becomes overconfident and almost always selects that arm in the future iterations. In case that arm is not the best arm, it takes a long time for the algorithm to divert from it. For example, if \u03b1 T,a = 90%, then the algorithm selects an arm other than a on average on every 10 iterations, which would make it significantly longer to get to a point where \u03b1 T ,a = 95%, i.e. the point where the algorithm terminates with confidence 95% in a different arm a . Let \u03b1 T := (\u03b1 T,1 , . . . , \u03b1 T,K ) be the vector of arm probabilities to be optimal. Pure Exploration Thompson Sampling (Russo, 2016) addresses the above deficiency of Thompson Sampling by throwing away, with probability \u03b2, the arm a sampled from \u03b1 T . Instead, it samples another arm b = a with the probability proportional to \u03b1 T,b . The exploration parameter \u03b2 prevents the algorithm from exclusively focusing on one arm. Usually \u03b2 is set to .5 but we empirically investigate other values for this parameter in \u00a74. We can revert to basic Thompson Sampling by setting \u03b2 = 1 in PETS. Similar to Thompson Sampling, this arm selection method can be efficiently implemented as shown in Algorithm 1. We terminate the algorithm when it reaches a maximum number of queries or when \u03b1 T,a \u2265 1 \u2212 \u03b4, where \u03b4 is the confidence parameter provided in the input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u0398 \u223c \u03c0t(\u0398) a \u2190 argmax k E f (y|\u03b8 k ) [y] r \u223c uniform(0, 1) if r \u2264 \u03b2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Thompson Sampling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In this section we present a novel hierarchical Bayesian model for capturing the uncertainty over systems' F-measures, as the prediction outcome on new query documents are received. We present this model for F-measure, however, we note that it can be extended for other performance measures as well. F-measure is defined as the harmonic mean of the precision and recall:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "F-measure := 2 1 precision + 1 recall precision := T P T P + F P , recall := T P T P + F N", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where (T P, F P, T N, F N ) denote true positive, false positive, true negative, and false negative counts. These counts result from comparing the predictions of a system with the ground truth annotations, and they sum to the total number of annotated data items N . We denote the normalised version of the counts by rates (T P ,F P ,T N ,F N ), which are derived by dividing the raw counts by N . Importantly, the rate statistics are enough to calculate precision, recall, and F-measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
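Since precision, recall, and F-measure depend only on ratios of the counts, they can be computed identically from raw counts or from the normalised rates; a small sketch (function name ours):

```python
def f_measure(tp, fp, tn, fn):
    # Works for raw counts or for rates (counts divided by N):
    # the common factor 1/N cancels in every ratio below.
    # Note tn is unused: F-measure ignores true negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 / (1 / precision + 1 / recall)
```

This scale invariance is why the rate statistics alone suffice for the model.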
|
{ |
|
"text": "Instead of modelling the uncertainty over the Fmeasure of a system directly, we model the uncertainty over its rate statistics. Any distribution over (T P ,F P ,T N ,F N ) then induces a distribution over F-measure. The benefit of working with the rate statistics is that they relate more naturally to the observed (T P, F P, T N, F N ) counts, as established in our generative model in the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "More specifically, we assume a hierarchical model to generate the rate statistics of the systems and the observed (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "T P d , F P d , T N d , F N d ) counts over a collection of documents d \u2208 D.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For each system, we draw its rate statistics \u03b8 k := (T P k ,F P k ,T N k ,F N k ) from a Dirichlet prior. To generate the counts statistics resulting from applying the system a t on the document d t , we first generate a document-specific rate vector \u00b5 dt from a Dirichlet distribution centred around \u03b8 at . Note that including explicit document-specific rates \u00b5 dt in the model (from which the binomial counts are drawn) is necessary in order to allow for sufficient variation in the observed error rates across documents, due to the inherent differences in difficulty of labelling different documents. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u03b8 0 \u03b8 k k \u2208 [1 . . . K] t \u2208 [1 . . . T ] c dt a t \u00b5 dt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Figure 1: The graphical model for the probabilistic generation of a system's parameters \u03b8 k and a document's counts c dt , as the selected system a t is applied onto the document d t at the time step t. The observed quantities are shaded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We then generate the observed counts c dt := (T P dt , F P dt , T N dt , F N dt ) from the Bionomial distribution with parameters \u00b5 dt and N dt , where N dt is the number data items in d t . In summary, the generative model is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2200k \u2208 [1..K] : \u03b8 k \u223c Dirichlet(\u03b8 0 , \u03b1 0 ) \u2200t \u2208 [1..T ] : \u00b5 dt \u223c Dirichlet(\u03b8 at , \u03b1) c dt \u223c Bionomial(\u00b5 dt , N dt )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where \u03b1 0 and \u03b1 are the concentration parameters, which we set to 1 in our experiments in \u00a74. Figure 1 depicts the graphical model. For inference, the quantities of interest are the unknown rates for the systems {\u03b8 k } K k=1 . The observed quantities are document-specific counts {c dt } T t=1 , and we would like to marginalise out the latent document-specific rate variables {\u00b5 dt } T t=1 . We resort to Gibbs sampling for inference in our model. That is, we iteratively select a hidden variable and sample a value from its posterior given all the other variables are fixed to their current values. In our experiments, we collect 1000 samples from the posterior. 2 Algorithm 2 depicts the samplingbased inference for the posterior embedded in the PETS algorithm for the best system identification.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 100, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
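A forward-sampling sketch of the generative story above (our own illustration; the paper's actual posterior inference uses Gibbs sampling in JAGS). Dirichlet draws are built from normalised gamma variates, and document counts are drawn categorically from the document-specific rates:

```python
import random

def dirichlet(mean, concentration, rng):
    # Dirichlet draw centred on `mean`; floor guards gammavariate's alpha > 0.
    g = [rng.gammavariate(max(concentration * m, 1e-9), 1.0) for m in mean]
    total = sum(g)
    return [x / total for x in g]

def sample_document_counts(theta, alpha, n_items, rng):
    # mu_d ~ Dirichlet(theta, alpha): document-specific (tp, fp, tn, fn) rates,
    # then one categorical outcome per annotated item in the document.
    mu = dirichlet(theta, alpha, rng)
    counts = [0] * len(mu)
    for _ in range(n_items):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(mu):
            acc += p
            if r < acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point rounding at acc ~ 1
    return counts
```

Repeated draws for a fixed theta give the spread of per-document counts that the explicit mu_d layer is meant to capture.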
|
{ |
|
"text": "F-measure is a frequently used evaluation measure, which can straightforwardly be parametrized to allows for varying the importance of precision ability in (TP,FP,TN,FN) counts, we had to use a Dirichlet-Compound-Multinomial with shared Dirichlet prior rather than a simple Multinomial with a shared Dirichlet Prior.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2 We make use of the JAGS (Just Another Gibbs Sampler) toolkit (Plummer, 2003) for inference in our model. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 78, |
|
"text": "(Plummer, 2003)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Probabilistic Generative Model of F-measure", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "Algorithm 2 Identifying the best system: 1: for k \u2208 [1..K] do 2: D_k \u2190 NextDoc(k, D) 3: S_k \u2190 {\u03b8\u0302_j | \u2200j \u2208 [1..J] : \u03b8\u0302_j \u223c_{Gibbs} \u03c0(\u03b8_j|D_k)} 4: end for 5: while termination condition is not met do 6: for k \u2208 [1..K] do 7: f_k \u223c {Fmeasure(\u03b8) | \u03b8 \u2208 S_k} 8: end for 9: a \u2190 argmax_k f_k 10: r \u223c uniform(0, 1) 11: b \u2190 a 12: if r > \u03b2 then 13: while b = a do 14: for k \u2208 [1..K] do 15: f_k \u223c {Fmeasure(\u03b8) | \u03b8 \u2208 S_k} 16: end for 17: b \u2190 argmax_k f_k 18: end while 19: end if 20: D_b \u2190 D_b \u222a NextDoc(b, D) 21: S_b \u2190 {\u03b8\u0302_j | \u2200j \u2208 [1..J] : \u03b8\u0302_j \u223c_{Gibbs} \u03c0(\u03b8_j|D_b)} 22: end while. F_\u03b2-measure := 1 / (\u03b2/precision + (1\u2212\u03b2)/recall)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm 2 Identifying the best system",

"sec_num": null

},
|
{ |
|
"text": "where \u03b2 is a parameter trading off precision and recall. We note that our approach can be applied straightforwardly to F \u03b2 -measure to put more weight on precision or recall where appropriate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 2 Identifying the best system", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We designed two sets of experiments to examine the efficiency and performance of each algorithm using synthetic data as well as real data for sentence level sentiment classification and named entity recognition tasks. With the synthetic data, we analyse our probabilistic generative model for F-measure in combination with the arm selection algorithms. With the real data, we showcase the statistical efficiency of our best system identification approach compared to the standard hypothesis testing approach (Dem\u0161ar, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 508, |
|
"end": 522, |
|
"text": "(Dem\u0161ar, 2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the real data experiments, we define the \"best system\" as the system with the highest F-measure based on all documents in the collection. The \"success rate\" in the NER/Sentiment tasks is then simply the percentage of times the best system is correctly identified (i.e. ranked highest when the system selection algorithm is terminated) over multiple runs of the selection algorithm on random reorderings of the document collection. We emphasise that, in these experiments, we simulate a scenario where the aim is to select the best system with the minimum number of queries to showcase the effectiveness of our approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Empirical Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As baselines, we consider the minimum number of documents needed by the standard statistical power approach. The power of a binary hypothesis testing is the probability that the test correctly rejects the null hypothesis (H 0 ) when the alternative hypothesis (H 1 ) is true. In order to find a lowerbound for the number of documents, we make use of the power calculation for a paired T-Test.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The T-Test indicates whether or not the difference between two groups' averages most likely reflects a \"real\" difference in the population from which the groups were sampled. Assuming we have two competing systems, we can set up a T-Test to assess whether there is a meaningful difference between the F-measures of the two systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We assume an efficient experimental design where the same number of (identical) documents are sent to each system. Assuming a typical power setting of 80% and a significance level of 5%, we can calculate an \"Oracle baseline\" by making use of the true effect size (the standardised difference in mean performance) across the top two systems. 3 Obviously this quantity would not be known apriori of running the experiment, hence the sample size calculated based on this effect size provides a lower-bound on the number of samples that ought be needed 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 342, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Across the systems, average performance on individual documents will vary due to variations in the inherent difficulty of each document. In other words, some documents are harder to label than others. Thus we make use of a paired sample test for the power calculation. Effect sizes are calculated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 For the synthetic experiments, the variation in difficulty of the documents is not modelled, so we calculate the effect size by simply using the parameters of the simulation as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u00b5 1 \u2212\u00b52 \u221a (\u03c3 2 1 +\u03c3 2 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ", where \u00b5 1 is the mean performance on the best system and \u00b5 2 is the mean performance on the second best system (likewise for the standard deviations \u03c3 1 and \u03c3 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 For the experiments on real data, variation will be document dependent and hence we calculate the effect size as AV G(f 1 \u2212f 2 ) ST DEV (f 1 \u2212f 2 ) , where we directly measure the average and standard deviation of the performance differences between the top performing APIs across the documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
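{
"text": "The paired effect-size calculation above can be sketched in a few lines of Python; the per-document F-scores below are hypothetical values for illustration, not numbers from the paper.

```python
from statistics import mean, stdev

def paired_effect_size(f1, f2):
    """AVG(f1 - f2) / STDEV(f1 - f2) over paired per-document F-scores."""
    diffs = [a - b for a, b in zip(f1, f2)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-document F-scores for the top two systems.
f_best   = [0.82, 0.75, 0.90, 0.68, 0.79]
f_second = [0.80, 0.70, 0.85, 0.66, 0.74]
d = paired_effect_size(f_best, f_second)
```

Because performance differences on the same documents are correlated, the paired standard deviation is typically much smaller than the pooled one, which is why the paired effect size can be large even when the raw margin is small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},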
|
{ |
|
"text": "Since we are comparing many APIs at once, and a priori to running the experiment we do not know which two systems are the best, we make use of two settings for the confidence level (i.e. the P-value threshold) in the power calculation:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline 2 : Assume the top two systems are known a priori and use the significance level of 5% directly to produce a lower bound on the number of iterations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline K\u22121 : Assume one system is new and is being compared against the other k \u2212 1 APIs. We reduce the required significance level \u03b1 using a Bonferroni correction to \u03b1/(k \u2212 1) to account for the k \u2212 1 comparisons being performed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
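{
"text": "The sample sizes behind both baselines can be approximated with the standard normal-approximation formula for a one-sided paired test (a sketch; the paper's footnote uses R's power.t.test, and the effect size 0.3 and arm count k = 5 below are illustrative assumptions).

```python
from math import ceil
from statistics import NormalDist

def required_n(effect, alpha=0.05, power=0.80):
    """Approximate sample size for a one-sided paired test via the
    normal approximation: n ~= ((z_{1-alpha} + z_{power}) / effect)^2."""
    z = NormalDist()
    return ceil(((z.inv_cdf(1 - alpha) + z.inv_cdf(power)) / effect) ** 2)

# Illustrative effect size of 0.3 with k = 5 competing systems.
n_baseline2 = required_n(0.3, alpha=0.05)            # Baseline_2
n_baselineK = required_n(0.3, alpha=0.05 / (5 - 1))  # Baseline_{K-1}, Bonferroni
```

Shrinking either the effect size or the per-comparison significance level inflates the required number of query documents, which is exactly the cost the bandit approach tries to avoid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},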
|
{ |
|
"text": "We stress that this is an unrealistic scenario in which the effect size is known before running the experiment. If this value is not known, or needs to be estimated before the experiment, a much larger value would be used; for example, a value based on the error threshold might be appropriate.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Datasets. For the synthetic data, we generate the (TP, FP, TN, FN) counts of applying the competing systems to hypothetical documents, assuming that we know the systems' true rates. An important factor in the difficulty of the problem is the difference in the F-score of the top two performing systems, which we denote by margin.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We consider three levels of problem difficulty by considering the margin m \u2208 {.01, .025, .05}, and for each margin we consider 5 configurations whose competing systems have the specified margin. Given the true (TP, FP, TN, FN) rates for a competing system, the count statistics for its results on hypothetical documents are Table 1 : Average number of queries across different margins. The number of systems is 5, the maximum number of queries is set to 2000, and \u03b4 = 0.05.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 337, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "then generated based on our generative model. For each competing configuration, we repeat the experiment multiple times in order to account for the randomness inherent in the algorithms and the generated documents. In different experiments, we let the number of competing systems K be {5, 10, 20}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
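{
"text": "A minimal sketch of this synthetic generation step: given a system's true per-token outcome probabilities, draw an outcome for every token of a hypothetical document and tally the counts. The rates and document length below are illustrative, not the configurations used in the paper.

```python
import random

def simulate_counts(rates, n_tokens, rng):
    """Draw per-document (TP, FP, TN, FN) counts for a system with the
    given true per-token outcome probabilities."""
    outcomes = ("TP", "FP", "TN", "FN")
    draws = rng.choices(outcomes, weights=rates, k=n_tokens)
    return {o: draws.count(o) for o in outcomes}

rng = random.Random(0)
counts = simulate_counts([0.10, 0.02, 0.85, 0.03], n_tokens=200, rng=rng)
```

Repeating this draw per document yields the document-level count statistics that the benchmarking algorithms consume.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Data Experiments",
"sec_num": "4.2"
},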
|
{ |
|
"text": "Margin and the number of systems. In this experiment, we investigate the relation between the margin and the number of documents queried by each algorithm. Intuitively, as the margin between the top performing systems decreases, more queries are required to separate the best system from the other top performing ones. We run each algorithm 500 times on the competing configurations for each margin with K = 5. The maximum number of queries allowed is 2000, and the algorithm can terminate early as soon as \u03b1_{T,a} \u2265 0.95, i.e. \u03b4 = 0.05. Table 1 summarises the average number of queries and the success rates of TS and PETS in combination with our hierarchical Bayesian model for F-measure across different margins. We see that the number of query documents increases as the margin decreases. It is also worth noting that PETS requires slightly fewer queries than Thompson Sampling. Interestingly, the number of samples required by the hypothesis testing baselines is much greater than that required by TS/PETS combined with our hierarchical model.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 537, |
|
"end": 544, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
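{
"text": "The stopping rule can be illustrated with a simplified Thompson Sampling loop. This sketch replaces the paper's hierarchical F-measure model with Bernoulli arms and Beta posteriors, and estimates each arm's posterior probability of being best by Monte Carlo, stopping once some arm reaches 1 \u2212 \u03b4; the arm success rates are illustrative.

```python
import random

def thompson_best_arm(true_p, delta=0.05, max_queries=2000, mc=500, seed=0):
    """Best-arm identification via Thompson Sampling on Bernoulli arms.
    Stops as soon as the Monte Carlo estimate of some arm's posterior
    probability of being best reaches 1 - delta."""
    rng = random.Random(seed)
    K = len(true_p)
    wins, losses = [1] * K, [1] * K  # Beta(1, 1) priors
    best_counts = [0] * K
    for t in range(1, max_queries + 1):
        # Sample a success rate from each posterior; pull the argmax arm.
        a = max(range(K), key=lambda i: rng.betavariate(wins[i], losses[i]))
        if rng.random() < true_p[a]:
            wins[a] += 1
        else:
            losses[a] += 1
        # Monte Carlo estimate of P(arm i is best) for every arm.
        best_counts = [0] * K
        for _ in range(mc):
            draws = [rng.betavariate(wins[i], losses[i]) for i in range(K)]
            best_counts[max(range(K), key=draws.__getitem__)] += 1
        top = max(range(K), key=best_counts.__getitem__)
        if best_counts[top] / mc >= 1 - delta:
            return top, t
    return max(range(K), key=best_counts.__getitem__), max_queries

# Illustrative arms: the third system is clearly best.
arm, n_queries = thompson_best_arm([0.30, 0.35, 0.80])
```

Because sampling concentrates queries on the contending arms, clearly inferior systems are ruled out cheaply, which is where the savings over uniform hypothesis-testing designs come from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Data Experiments",
"sec_num": "4.2"
},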
|
{ |
|
"text": "We then ask whether the number of competing systems is important. Table 2 summarises the average number of queries and the success rate of each algorithm on the competing configurations for margin 0.05 with a varying number of systems K \u2208 {5, 10, 20}. As seen, the number of queries increases (sub)linearly with the number of competing systems.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 86, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Hierarchical vs Gaussian. We compare our hierarchical model for capturing the uncertainty over F-measure with a Gaussian distribution. That is, we associate a Gaussian distribution with each system to model its posterior over the F-measure. The use of the Gaussian distribution to model the mean of sampled F-measures is motivated by the law of large numbers. This approach directly models the uncertainty of a system's F-measure, as opposed to our indirect modelling approach, where the posterior distribution is constructed using the distribution of (FP, FN, TP, TN) rates. Tables 1 and 2 show the average number of queries and success rates for algorithms using our hierarchical model vs the Gaussian-based model. The general trend is that using the Gaussian model in TS/PETS requires significantly more queries than both the hierarchical model and the baselines. Needing more queries than the baselines highlights the importance of choosing the right distribution for capturing the uncertainty over the F-measure in TS/PETS. Needing more queries than the hierarchical variant is somewhat expected, as the synthetic data is generated according to the hierarchical model. However, we will see similar trends in the experiments on real data.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 576, |
|
"end": 590, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Synthetic Data Experiments", |
|
"sec_num": "4.2" |
|
}, |
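{
"text": "The contrast between the two posteriors can be sketched as follows: instead of fitting a Gaussian to observed F-scores, place a Dirichlet posterior over the (TP, FP, TN, FN) rates and map each draw through the F-measure formula. The uniform Dirichlet(1,1,1,1) prior and the counts below are illustrative assumptions, not the paper's exact hierarchical model.

```python
import random

def sample_f_posterior(tp, fp, tn, fn, n_samples=1000, seed=0):
    """Posterior samples of F-measure from a Dirichlet over the
    (TP, FP, TN, FN) rates, mapped through F = 2TP / (2TP + FP + FN).
    Unlike a Gaussian fit to observed F-scores, every sample lies
    in [0, 1]."""
    rng = random.Random(seed)
    params = [tp + 1, fp + 1, tn + 1, fn + 1]  # Dirichlet(1,1,1,1) prior
    samples = []
    for _ in range(n_samples):
        g = [rng.gammavariate(c, 1.0) for c in params]  # Dirichlet via Gammas
        total = sum(g)
        r_tp, r_fp, r_tn, r_fn = (x / total for x in g)
        samples.append(2 * r_tp / (2 * r_tp + r_fp + r_fn))
    return samples

# Illustrative counts, not taken from the paper.
post = sample_f_posterior(tp=40, fp=10, tn=140, fn=10)
```

A Gaussian posterior can place probability mass outside [0, 1] and misrepresents the skew of F-measure for small counts, which is one plausible reason the count-based posterior needs fewer queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic Data Experiments",
"sec_num": "4.2"
},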
|
{ |
|
"text": "We consider the task of sentence level sentiment prediction for medical documents. The aim is to benchmark systems according to how well they can predict the polarity of sentences contained in a medical report, where each report corresponds to a patient. Dataset. We make use of a biomedical corpus (Martinez et al., 2015) consisting of CT reports for fungal disease detection collected from three hospitals. For each report, only the free text section was used, which contains the radiologist's understanding of the scan and the reason for the requested scan as written by clinicians. Every report was de-identified: any potentially identifying information, such as name, address, age/birthday and gender, was removed. There are a total of 358 test documents, and the average number of sentences per document is 23.",
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 308, |
|
"text": "(Martinez et al., 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Competing Systems. We make use of a variant of the fine-to-coarse model proposed in (McDonald et al., 2007) for sentiment analysis. Briefly, the model couples the sentiment of the sentences contained in a report with the overall sentiment of the report. We train four versions of the model, each of which corresponds to a different training condition:",
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 107, |
|
"text": "(McDonald et al., 2007)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 M full :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "where the model is trained on the fully annotated data D F , i.e. the data annotated at both the sentence and report level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 M partial : where the model is trained on both D F and the partially annotated data D P in which the sentence level annotation is missing but the reports are labeled.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 M unlab : where the model is trained on D F and D U in which the annotation is missing at both sentence and report level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 M all : where the model is trained on all of the available data described above. Table 3 : Sentiment classification for biomedical reports with 4 competing models. The maximum number of queries is set to 500, and \u03b4 = 0.05.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 90, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We expect M all to outperform the other models. The aim is to analyse the behaviour of our best system selection methods on real data compared to the baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Results. Table 3 presents the results. As seen, the number of queries needed by TS/PETS combined with the hierarchical model is much smaller than that of the baselines and the Gaussian variant.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 16, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentiment Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In our second set of experiments, we examine how our framework and F_1 models perform on realistic data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "MASC Corpus. For benchmarking the NER systems, we use the Manually Annotated Sub-Corpus (MASC) (Ide et al., 2008) that includes 19 different domains. The corpus consists of approximately 500K words of contemporary American English written and spoken data drawn from the Open American National Corpus (OANC). This corpus includes a wide variety of linguistic annotations with a balanced selection of texts from a broad range of genres/domains. The diversity of the corpus will enable us to assess the robustness of tools across different domains. The number of documents in the MASC corpus is about 392.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 113, |
|
"text": "(Ide et al., 2008)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Competing Systems. We evaluate the performance of 5 popular NER systems available as APIs or third-party implementations:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 OpenNLP (Ingersoll et al., 2013) : The Apache OpenNLP library is a machine learning based toolkit for text processing, built on maximum entropy and perceptron models.",
|
"cite_spans": [ |
|
{ |
|
"start": 2, |
|
"end": 34, |
|
"text": "OpenNLP (Ingersoll et al., 2013)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Stanford NER (Finkel et al., 2005) : It is based on linear chain Conditional Random Field (CRF) sequence models. It is part of the Stanford CoreNLP, which is an integrated suite of NLP tools in Java.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 36, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 ANNIE (Cunningham et al., 2002) : ANNIE uses gazetteer-based lookups and finite state machines for entity identification and classification. It can recognise persons, locations, organisations, dates, addresses and other named entity types. ANNIE is part of the GATE framework. It can be used as a Web Service but it also provides its own interface for independent use.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 33, |
|
"text": "(Cunningham et al., 2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Meaning Cloud (MeaningCloud-LLC, 1998): It is based on a hybrid approach combining machine learning with a rule based system. The software is available as a cloud based solution and on-premise as a plugin module for the GATE framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 LingPipe (Alias-i, 2008): It is a set of Java libraries for NLP developed by Alias-i. The NER component is based on a first-order Hidden Markov Model with variable-length n-grams as the feature set, and uses the IOB annotation scheme for its output.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Results. Table 4 presents the results. As seen, the number of queries needed by TS/PETS combined with the hierarchical model is much smaller than that of the baselines and the Gaussian variant. Table 4 : Named entity recognition on MASC documents with 5 competing systems. The maximum number of queries is set to 2000, and \u03b4 = 0.05.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 16, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 202, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Named Entity Recognition", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We have presented a novel approach for benchmarking NLP systems based on the multi-armed bandit (MAB) problem. We have proposed a hierarchical generative model to represent the uncertainty in the performance measures of the competing systems, to be used by the Thompson Sampling algorithm to solve the resulting MAB problem. Experimental results on both synthetic and real data show that our approach requires significantly fewer queries than the standard benchmarking technique to identify the best system according to F-measure. Future work includes applying our approach to other NLP problems, particularly emerging document-level problem settings such as document-level machine translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In other words, in order to model the observed vari-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the power calculation, we use the following R command: power.t.test(delta=effect, sd=1, sig.level=0.05, power=.8, type=\"paired\", alternative=\"one.sided\").4 Note that the t-test assumes Gaussian distributed data, but the violation of this assumption is unlikely to greatly affect the sample size estimates.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "G. H. appreciates fruitful discussions with Mohammad Ghavamzadeh and Yasin Abbasi-Yadkori. We are grateful to reviewers for their insightful feedback and comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Pure exploration in multi-armed bandits problems", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Bubeck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Munos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gilles", |
|
"middle": [], |
|
"last": "Stoltz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "International conference on Algorithmic learning theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00e9bastien Bubeck, R\u00e9mi Munos, and Gilles Stoltz. 2009. Pure exploration in multi-armed bandits prob- lems. In International conference on Algorithmic learning theory, pages 23-37. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An empirical evaluation of thompson sampling", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2249--2257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Chapelle and Lihong Li. 2011. An empirical evaluation of thompson sampling. In Advances in neural information processing systems, pages 2249- 2257.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications", |
|
"authors": [ |
|
{ |
|
"first": "Hamish", |
|
"middle": [], |
|
"last": "Cunningham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Maynard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalina", |
|
"middle": [], |
|
"last": "Bontcheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valentin", |
|
"middle": [], |
|
"last": "Tablan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: A Framework and Graphical Development Envi- ronment for Robust NLP Tools and Applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Statistical comparisons of classifiers over multiple data sets", |
|
"authors": [ |
|
{ |
|
"first": "Janez", |
|
"middle": [], |
|
"last": "Dem\u0161ar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "1--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janez Dem\u0161ar. 2006. Statistical comparisons of clas- sifiers over multiple data sets. J. Mach. Learn. Res., 7:1-30, December.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Incorporating non-local information into information extraction systems by gibbs sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, pages 363-370. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Best arm identification: A unified approach to fixed budget and fixed confidence", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Gabillon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Ghavamzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Lazaric", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3212--3220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Gabillon, Mohammad Ghavamzadeh, and Alessandro Lazaric. 2012. Best arm identification: A unified approach to fixed budget and fixed confi- dence. In Advances in Neural Information Process- ing Systems, pages 3212-3220.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Masc: The manually annotated sub-corpus of american english", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Collin", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC. Citeseer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Ide, Collin Baker, Christiane Fellbaum, and Charles Fillmore. 2008. Masc: The manually an- notated sub-corpus of american english. In In Pro- ceedings of the Sixth International Conference on Language Resources and Evaluation (LREC. Cite- seer.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Taming text: how to find, organize, and manipulate it", |
|
"authors": [ |
|
{
"first": "Grant",
"middle": [
"S"
],
"last": "Ingersoll",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Morton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Farris",
"suffix": ""
}
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grant S Ingersoll, Thomas S Morton, and Andrew L Farris. 2013. Taming text: how to find, organize, and manipulate it. Manning Publications Co.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Hoeffding races: Accelerating model selection search for classification and function approximation. Robotics Institute", |
|
"authors": [ |
|
{ |
|
"first": "Oded", |
|
"middle": [], |
|
"last": "Maron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew W", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oded Maron and Andrew W Moore. 1993. Hoeffd- ing races: Accelerating model selection search for classification and function approximation. Robotics Institute, page 263.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Structured models for fine-to-coarse sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kerry", |
|
"middle": [], |
|
"last": "Hannan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tyler", |
|
"middle": [], |
|
"last": "Neylon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Wells", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Reynar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "432--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured mod- els for fine-to-coarse sentiment analysis. In Pro- ceedings of the 45th Annual Meeting of the Associ- ation of Computational Linguistics, pages 432-439, Prague, Czech Republic, June. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Meaning Cloud", |
|
"authors": [ |
|
{ |
|
"first": "-Llc", |
|
"middle": [], |
|
"last": "Meaningcloud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MeaningCloud-LLC. 1998. Meaning Cloud. Avail- able at: https://www.meaningcloud.com/.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling", |
|
"authors": [ |
|
{ |
|
"first": "Martyn", |
|
"middle": [], |
|
"last": "Plummer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 3rd International Workshop on Distributed Statistical Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martyn Plummer. 2003. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd International Workshop on Distributed Statistical Computing.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Simple bayesian algorithms for best arm identification", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Russo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1602.08448" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Russo. 2016. Simple bayesian algo- rithms for best arm identification. arXiv preprint arXiv:1602.08448.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Multi-armed bandit experiments in the online service economy", |
|
"authors": [ |
|
{
"first": "Steven",
"middle": [
"L"
],
"last": "Scott",
"suffix": ""
}
|
], |
|
"year": 2015, |
|
"venue": "Applied Stochastic Models in Business and Industry", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "37--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven L Scott. 2015. Multi-armed bandit experiments in the online service economy. Applied Stochastic Models in Business and Industry, 31(1):37-45.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1933, |
|
"venue": "Biometrika", |
|
"volume": "25", |
|
"issue": "3-4", |
|
"pages": "285--294", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W.R. Thompson. 1933. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285- -294.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Require: K: Number of arms, J: Number of samples from posterior \u03c0(.), D: Document collection, F-measure(TP, TN, FP, FN) := 2TP / (2TP + FP + FN), NextDoc(a, D): Next document for an arm a from D 1: for",
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Average number of queries across different number of competing systems. The margin is 0.05, and the maximum number of queries is set to 2000, and \u03b4 = 0.05.", |
|
"content": "<table><tr><td>to a patient.</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |