{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:49.355189Z"
},
"title": "Reformulating Information Retrieval from Speech and Text as a Detection Problem",
"authors": [
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Rabih",
"middle": [],
"last": "Zbib",
"suffix": "",
"affiliation": {
"laboratory": "Avature",
"institution": "",
"location": {
"country": "Spain"
}
},
"email": "[email protected]"
},
{
"first": "William",
"middle": [],
"last": "Hartmann",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the IARPA MATERIAL program, information retrieval (IR) is treated as a hard detection problem; the system has to output a single global ranking over all queries, and apply a hard threshold on this global list to come up with all the hypothesized relevant documents. This means that how queries are ranked relative to each other can have a dramatic impact on performance. In this paper, we study such a performance measure, the Average Query Weighted Value (AQWV), which is a combination of miss and false alarm rates. AQWV requires that the same detection threshold is applied to all queries. Hence, detection scores of different queries should be comparable, and, to do that, a score normalization technique (commonly used in keyword spotting from speech) should be used. We describe unsupervised methods for score normalization, which are borrowed from the speech field and adapted accordingly for IR, and demonstrate that they greatly improve AQWV on the task of cross-language information retrieval (CLIR), on three low-resource languages used in MATERIAL. We also present a novel supervised score normalization approach which gives additional gains. * While at Raytheon BBN Technologies.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In the IARPA MATERIAL program, information retrieval (IR) is treated as a hard detection problem; the system has to output a single global ranking over all queries, and apply a hard threshold on this global list to come up with all the hypothesized relevant documents. This means that how queries are ranked relative to each other can have a dramatic impact on performance. In this paper, we study such a performance measure, the Average Query Weighted Value (AQWV), which is a combination of miss and false alarm rates. AQWV requires that the same detection threshold is applied to all queries. Hence, detection scores of different queries should be comparable, and, to do that, a score normalization technique (commonly used in keyword spotting from speech) should be used. We describe unsupervised methods for score normalization, which are borrowed from the speech field and adapted accordingly for IR, and demonstrate that they greatly improve AQWV on the task of cross-language information retrieval (CLIR), on three low-resource languages used in MATERIAL. We also present a novel supervised score normalization approach which gives additional gains. * While at Raytheon BBN Technologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When an information retrieval system is used as a support tool in a decision-making process, the user is mainly interested in whether the data under consideration contains (or, is relevant to) any of the queries of interest. For example, consider the case of streaming audio where actions must be made based upon a query detection. As each document is processed, a binary decision must be made about relevance for each query 1 . Clearly, when dealing with a decision operation, the most appropriate way to measure system performance (from an operational viewpoint) is to incorporate the two error sources that affect a user's experience: misses and false alarms. Minimizing a linear combination of these two errors is a very reasonable optimization objective, and it was chosen by the IARPA MATERIAL program as the main performance measure. Specifically, the AQWV measure is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "AQW V = 1 \u2212 pMiss \u2212 \u03b2 pFA.",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "pMiss is the average per-query miss rate and is defined as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pMiss = 1 |Q r | q\u2208Qr # misses of q # refs of q ,",
"eq_num": "(2)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "where Q r is the set of queries with references in the data (i.e., each has at least one relevant document). The number of references and the number of misses of query q is computed based on the whole document collection C under consideration. pFA, the average per-query false alarm rate, is defined as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pFA = 1 |Q| q\u2208Q # FAs of q |C | -# refs of q ,",
"eq_num": "(3)"
}
],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The constant \u03b2 in Equation (1) changes the relative importance of the two types of error (\u03b2 = 40 in MATERIAL).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
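As a concrete reference, Equations (1)-(3) can be sketched in a few lines of Python. This is an illustrative implementation only (names such as aqwv, decisions, refs are ours, not from the paper), assuming hard decisions are given as a set of (query, document) pairs and references as per-query sets of relevant documents.

```python
from typing import Dict, Set, Tuple

def aqwv(decisions: Set[Tuple[str, str]],
         refs: Dict[str, Set[str]],
         queries: Set[str],
         num_docs: int,
         beta: float = 40.0) -> float:
    """AQWV = 1 - pMiss - beta * pFA (Equations 1-3)."""
    # pMiss averages over Q_r, the queries with at least one relevant document.
    q_with_refs = [q for q in queries if refs.get(q)]
    p_miss = sum(
        sum(1 for d in refs[q] if (q, d) not in decisions) / len(refs[q])
        for q in q_with_refs
    ) / len(q_with_refs)
    # pFA averages over all queries; the denominator is |C| minus #refs of q.
    p_fa = sum(
        sum(1 for (qq, d) in decisions if qq == q and d not in refs.get(q, set()))
        / (num_docs - len(refs.get(q, set())))
        for q in queries
    ) / len(queries)
    return 1.0 - p_miss - beta * p_fa
```

Note that accepting nothing gives pMiss = 1 and pFA = 0, hence AQWV = 0, which is why a negative AQWV signals a threshold that is too permissive; the range is [-beta, 1].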
{
"text": "Note that this measure assumes a single decision threshold, which means that all detection scores, over all queries, have to be commensurate. In this paper, we present techniques for transforming the detection scores that are generated by an IR system so that they are comparable across queries. The paper is organized as follows: Section 2 gives a short summary of previous work on score normalization. Section 3 presents a supervised method for score normalization, adapted to IR. Section 4 describes the experimental setup and presents results on three low-resource languages used in the IARPA MATERIAL program: Somali, Swahili and Tagalog. Finally, Section 5 contains concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "AQWV is very similar to the Average Term Weighted Value (ATWV) (Fiscus et al., 2007) , which was first used in the NIST 2006 Spoken Term Detection evaluation and then in the IARPA BABEL program (Bab, 2011) for keyword spotting from speech. As was argued in (Karakos et al., 2013) and elsewhere, generating commensurate detection scores is important for optimizing this performance measure. The main difference between ATWV and AQWV is in the granularity of the detections: keyword spotting tries to find all occurrences of a keyword of interest, no matter how many times it is spoken in a speech document. By contrast, the IR task we consider here is about retrieving whole documents that contain the query of interest, but without the need to pinpoint its exact location in the document. In other words, the granularity of the keyword spotting task is at the second (or fraction of second) level, while the granularity of the information retrieval task is at the document level. So, when computing the denominators in pMiss and pFA, AQWV uses number of documents, not number of occurrences or number of seconds as in ATWV. For this reason, the range of AQWV is [\u2212\u03b2, 1] (as opposed to (\u2212\u221e, 1] for ATWV). (Wegmann et al., 2013 ) contains a detailed discussion of ATWV; most of the salient points also apply to",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Fiscus et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 257,
"end": 279,
"text": "(Karakos et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1204,
"end": 1225,
"text": "(Wegmann et al., 2013",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "A number of unsupervised score normalization approaches have been developed for keyword spotting. pFA normalization was introduced in (Zhang et al., 2012) and used again in (Karakos et al., 2013) . Keyword-specific thresholds (KST) (Karakos et al., 2013) is the most principled approach, as it is derived from fundamental theorems of decision theory. Sum-to-one (STO) (Wu, 2012; Mamou et al., 2013 ) is yet another popular approach, which was initially applied to problems in IR and later to keyword spotting. An in-depth comparison of these last two techniques appears in (Wang and Metze, 2014) , and, since we use them in our experiments, we give more details about them below (KST is renamed QST for obvious reasons). A version of QST was also used more recently in (Shing et al., 2019) for CLIR as well.",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Zhang et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 173,
"end": 195,
"text": "(Karakos et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 232,
"end": 254,
"text": "(Karakos et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 368,
"end": 378,
"text": "(Wu, 2012;",
"ref_id": "BIBREF18"
},
{
"start": 379,
"end": 397,
"text": "Mamou et al., 2013",
"ref_id": "BIBREF7"
},
{
"start": 573,
"end": 595,
"text": "(Wang and Metze, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 769,
"end": 789,
"text": "(Shing et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AQWV.",
"sec_num": null
},
{
"text": "This method estimates a query-specific threshold t(q), assuming the un-normalized scores are posterior probabilities or posterior-like numbers between 0 and 1. As mentioned in Section 1, the AQWV and ATWV metrics are similar, allowing us to use the same optimality reasoning to compute query-specific thresholds t(q). Decision theory tells us that the optimal threshold is where the expected cost of a false alarm and miss are equal. With some algebra, it can be shown that the \"optimal\" decision thresholds are given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t * (q) = \u03b2 N true (q) |C | + (\u03b2 \u2212 1)N true (q)",
"eq_num": "(4)"
}
],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "where N true (q) is the number of documents that are truly relevant to query q. This number is unknown, but it can be approximated by the sum of posteriors over the whole collection, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N sum (q) = d\u2208C score(q, d),",
"eq_num": "(5)"
}
],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "where score(q, d) is the retrieval score returned by the core IR system for query q and document d. Then, the normalized scores can either be given by a linear shift, or by the non-linear transformation mentioned in (Karakos et al., 2013 )",
"cite_spans": [
{
"start": 216,
"end": 237,
"text": "(Karakos et al., 2013",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score qst = exp \u2212 log(score) log(t * (q)) ,",
"eq_num": "(6)"
}
],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
{
"text": "which makes the common decision threshold for all queries equal to 1/e \u2248 0.3679. This is the decision threshold we use for computing AQWV in the QST results of Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query-Specific Thresholds (QST)",
"sec_num": null
},
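The QST procedure (Equations 4-6) can be sketched as follows, assuming posterior-like retrieval scores in (0, 1); function and variable names are illustrative, not from the paper.

```python
import math
from collections import defaultdict

def qst_normalize(scores, num_docs, beta=40.0):
    """scores: dict mapping (query, doc) -> posterior-like score in (0, 1).
    Returns (normalized scores, per-query thresholds t*(q))."""
    # Approximate N_true(q) by the sum of posteriors over the collection (Eq. 5).
    n_sum = defaultdict(float)
    for (q, _), s in scores.items():
        n_sum[q] += s
    # Query-specific decision thresholds (Eq. 4).
    t_star = {q: beta * n / (num_docs + (beta - 1.0) * n) for q, n in n_sum.items()}
    # Non-linear transformation (Eq. 6); it maps t*(q) itself to exp(-1) = 1/e,
    # so one global threshold of 1/e applies to every query.
    normalized = {
        (q, d): math.exp(-math.log(s) / math.log(t_star[q]))
        for (q, d), s in scores.items()
    }
    return normalized, t_star
```

Because the transformation is monotone in the raw score, a document scores above the global 1/e threshold exactly when its raw score exceeds t*(q).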
{
"text": "This method, mentioned in (Wu, 2012; Mamou et al., 2013) , performs a per-query normalization so that the normalized detections of a query over the whole document collection sum to one. In other words,",
"cite_spans": [
{
"start": 26,
"end": 36,
"text": "(Wu, 2012;",
"ref_id": "BIBREF18"
},
{
"start": 37,
"end": 56,
"text": "Mamou et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sum-to-One Score (STO)",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score sto = score N sum (q) ,",
"eq_num": "(7)"
}
],
"section": "Sum-to-One Score (STO)",
"sec_num": null
},
{
"text": "where N sum (q) is given by (5). Unlike QST, this method does not produce a decision threshold. As mentioned in (Mamou et al., 2013) , the decision threshold can be determined based on performance on a tuning set. In our experiments, we estimate the decision threshold on the training set and apply it on the two other datasets (Tune/Test).",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Mamou et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sum-to-One Score (STO)",
"sec_num": null
},
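STO (Equation 7) is a one-line per-query renormalization; a minimal sketch, with illustrative names:

```python
from collections import defaultdict

def sto_normalize(scores):
    """Per-query sum-to-one normalization (Eq. 7): divide each retrieval
    score by N_sum(q), the sum of that query's scores over the collection."""
    n_sum = defaultdict(float)
    for (q, _), s in scores.items():
        n_sum[q] += s
    return {(q, d): s / n_sum[q] for (q, d), s in scores.items()}
```

Unlike QST, the transformation implies no natural decision threshold, which is why the threshold must be tuned on held-out data.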
{
"text": "0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Swahili\u2212Text pFA(%) pMiss(%) q Swahili\u2212Text pFA(%) pMiss(%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sum-to-One Score (STO)",
"sec_num": null
},
{
"text": "Supervised (machine learning) techniques for score normalization focused on extracting a number of features and using them in a discriminative learning framework to directly compute the probability that a keyword is present in a specific location in the audio. For example, the authors in (Wang et al., 2009) used lattice-derived confidence scores as features in a MLP and SVM to come up with calibrated scores that significantly improved ATWV. In (Pham et al., 2014) , they used features such as posterior probability, number of vowels, how many other competing arcs were present in the ASR lattice, etc., in a MLP to compute posterior-like scores, which were subsequently normalized with KST or STO. In (Lv et al., 2016) , the features used were just the original posterior and KST-normalized score, but these were computed a few times, using different subword units. Finally, in (Soto et al., 2014) , a large number of features (both related to posteriors in confusion networks and their transformations, as well features derived from acoustics, phonetic dictionary, etc.) was used in a SVM framework, which led to significant improvements over the unsupervised methods. Many references related to keyword spotting and score normalization can also be found in (Tejedor et al., 2015) . Figure 1 shows a comparison of the DET curves for the un-normalized and normalized outputs of a CLIR system. There is a significant gain from normalization, especially around the range of values where the maximum AQWV (i.e., MQWV) is attained.",
"cite_spans": [
{
"start": 289,
"end": 308,
"text": "(Wang et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 448,
"end": 467,
"text": "(Pham et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 705,
"end": 722,
"text": "(Lv et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 882,
"end": 901,
"text": "(Soto et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1263,
"end": 1285,
"text": "(Tejedor et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 1288,
"end": 1296,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Previous Supervised Techniques",
"sec_num": null
},
{
"text": "Our approach to supervised score normalization is to (i) use an optimization framework that directly optimizes the measure of interest (AQWV), and (ii) use features that are both functions of the query and the document, without making any assumptions about whether we deal with speech or text (our approach has to be able to work well with both, so, it cannot rely on the presence of speech lattices or confusion networks, in contrast to the aforementioned approaches). We generate several features-functions of the corpus, query, and the original retrieval score-and then weight them appropriately. We learn the feature weights so that, when thresholded, the combined score maximizes the performance metric. We assume that each query-document pair (q, d) in the training data is labeled for relevance (0/1). We compute a number of features, such as the log of the following quantities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "\u2022 Original retrieval score(q, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "\u2022 The QST-transformed score score qst (q, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "\u2022 The normalized sum N sum (q)/|C |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "\u2022 The three features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "min w\u2208q {score(w, d)}, max w\u2208q {score(w, d)}, avg w\u2208q {score(w, d)},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "where avg w is just the average over all words w in query q (esp. for multi-word queries).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "\u2022 The three features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "min w\u2208q {count(w)}, max w\u2208q {count(w)}, avg w\u2208q {count(w)},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "where count(w) is the count of w in the IR training data (e.g., parallel data used to train the bilingual dictionary for CLIR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
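As an illustration, the word-level min/max/avg features above can be computed as in the sketch below; word_scores and word_counts are hypothetical lookup tables standing in for the per-word retrieval scores and training-data counts.

```python
import math

def word_level_features(query_words, word_scores, word_counts):
    """Log of min/max/avg per-word retrieval scores and per-word training-data
    counts for one (query, document) pair, as described in Section 3."""
    feats = {}
    for name, table in (("score", word_scores), ("count", word_counts)):
        vals = [table[w] for w in query_words]
        feats["log_min_" + name] = math.log(min(vals))
        feats["log_max_" + name] = math.log(max(vals))
        feats["log_avg_" + name] = math.log(sum(vals) / len(vals))
    return feats
```

For single-word queries the three statistics coincide; they only differ (and carry extra information) for multi-word queries.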
{
"text": "The above features f 1 , . . . , f F , together with the binary labels, are fed into an optimizer that uses Powell's method (Press et al., 2007) , with the goal to learn feature weights \u03b1 = (\u03b1 1 , . . . , \u03b1 F ), as well as an optimal decision threshold t * that maximize AQWV. At each optimization iteration, the weights are used to compute new retrieval scores",
"cite_spans": [
{
"start": 124,
"end": 144,
"text": "(Press et al., 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "score model (q, d) = F i=1 \u03b1 i \u2022 f i and new decisions decision(q, d) = 1[score model (q, d) \u2265 t * ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "During training, AQWV performance is also measured on a \"tuning\" set for early stopping. L2 regularization (which forces the trained weights to have small absolute values, to reduce the risk of overfitting) can also be used by changing the optimization criterion to Train Tune Test Train Tune Test Somali 338 482 478 142 213 222 Swahili 316 449 493 155 217 207 Tagalog 291 460 -171 244 -Table 1 : Size of various datasets (in terms of number of documents).",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 414,
"text": "Train Tune Test Train Tune Test Somali 338 482 478 142 213 222 Swahili 316 449 493 155 217 207 Tagalog 291 460 -171 244 -Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
{
"text": "AQWV(\u03b1, t) \u2212 \u03bb \u2022 L2(\u03b1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Score Normalization",
"sec_num": "3."
},
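The training loop of Section 3 can be sketched as follows. The paper optimizes with Powell's method; for a self-contained illustration, this sketch instead scans per-feature unit-weight candidates (with thresholds at midpoints of observed projections) plus random weight/threshold draws, directly maximizing AQWV on labeled (query, document) pairs. All names are illustrative.

```python
import random

def train_normalizer(features, labels, num_docs, beta=40.0, n_random=500, seed=0):
    """features: dict (q, d) -> list of F feature values; labels: dict (q, d) -> 0/1.
    Returns (weights, threshold, training AQWV)."""
    rng = random.Random(seed)
    pairs = sorted(features)
    F = len(features[pairs[0]])
    refs = {}          # query -> set of relevant docs, from the 0/1 labels
    queries = set()
    for (q, d), rel in labels.items():
        queries.add(q)
        if rel:
            refs.setdefault(q, set()).add(d)

    def aqwv_of(weights, t):
        # Threshold the linear combination, then score the hard decisions.
        decisions = {(q, d) for (q, d) in pairs
                     if sum(w * f for w, f in zip(weights, features[(q, d)])) >= t}
        q_ref = [q for q in queries if refs.get(q)]
        p_miss = sum(sum(1 for d in refs[q] if (q, d) not in decisions) / len(refs[q])
                     for q in q_ref) / len(q_ref)
        p_fa = sum(sum(1 for (qq, d) in decisions
                       if qq == q and d not in refs.get(q, set()))
                   / (num_docs - len(refs.get(q, set())))
                   for q in queries) / len(queries)
        return 1.0 - p_miss - beta * p_fa

    candidates = []
    for i in range(F):
        w = [1.0 if j == i else 0.0 for j in range(F)]
        proj = sorted({features[p][i] for p in pairs})
        candidates += [(w, (a + b) / 2.0) for a, b in zip(proj, proj[1:])]
    for _ in range(n_random):
        candidates.append(([rng.uniform(-1, 1) for _ in range(F)],
                           rng.uniform(-1, 1)))
    return max(((w, t, aqwv_of(w, t)) for w, t in candidates), key=lambda x: x[2])
```

In practice a derivative-free optimizer such as Powell's method (as in the paper) would refine the weights further; the point of the sketch is that the metric itself, not a surrogate loss, drives the search.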
{
"text": "Note that some of the above features are dependent on various basic properties of the corpus (e.g., number of documents) and of the query set (e.g., OOV rate). In this paper, we do not study the effect of mismatched train/test conditions that may arise, for instance, when train and test corpora are significantly different. A test set that is an order of magnitude larger than the training set can cause significant mismatch in the train/test feature distributions, for the corpus-dependent features we described earlier (such as the QST-transformed score and the normalized sum). We plan to investigate such scenarios in future work. Finally, note that, in lieu of Powell's method, we have also used a MLP framework. However, given that the data on which we train the learner is small, we did not manage to obtain results that generalized better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Audio",
"sec_num": null
},
{
"text": "To show the benefit of normalization and thresholding to IR, we report experimental results on a Cross-language IR (CLIR) task from three different languages to English: Somali, Swahili and Tagalog. Using data from the IARPA MATERIAL program, we report on retrieval of Text and Speech documents. For each genre, we consider three data and query set conditions: (i) Train: A training data set D Train and a training query set Q Train are used for training the normalization model of Section 3 as well as decision thresholds. (ii) Tune: A tuning set D Tune is used, together with Q Train , for evaluating the stopping criterion. (iii) Test: Unseen data set D Test and unseen query set Q Test are used to assess blind performance. Statistics of these corpora appear in Table 1 . As for the query set sizes, all languages have the same number of queries: Q Train consists of 300 queries and Q Test consists of 1000 queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 766,
"end": 773,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Query Sets and Retrieval Corpora",
"sec_num": "4.1."
},
{
"text": "We give a brief description of the CLIR system that is used to generate the original retrieval scores. A more detailed description appears in (Zbib et al., 2019) . It uses a probabilistic bilingual dictionary, trained on a set of parallel sentences and lexicons that were aligned with GIZA++ (Och and Ney, 2003) . For each language pair (Somali-English, Swahili-English and Tagalog-English) the bilingual dictionary provides a translation probability P (e|f ) between a source word f and a target word e. Queries consist of one or more words in the target language (English), and a document is deemed relevant to a query if it contains at least one occurrence of each of the terms of the query. 2",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "(Zbib et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 292,
"end": 311,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The CLIR System",
"sec_num": "4.2."
},
{
"text": "In mathematical terms, for query q and document d, and assuming that T (d) is the set of all translations of all words and phrases in d, the CLIR system computes score(q, d) as follows: (1 \u2212 P (w|f ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CLIR System",
"sec_num": "4.2."
},
{
"text": "Note that (Zbib et al., 2019) performs lexical translation of source-language documents to English instead of translation of the (short) English queries to the source language; the longer context in the source documents gives a more accurate translation. For speech documents, instead of using the translations of the 1-best output of the automatic speech recognition (ASR) system (which could be erroneous) we consider multiple ASR alternatives in the form of a confusion network. The latter allows us to have a probabilistic representation of the content of the foreign document, i.e., probability of occurrence p(f |d) for source word f . This can be used seamlessly in (8), giving rise to a modified formula",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Zbib et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The CLIR System",
"sec_num": "4.2."
},
{
"text": "P(d is relevant to q) = ∏_{w∈q} (1 − ∏_{f∈d} (1 − P(f|d) · P(w|f))) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CLIR System",
"sec_num": "4.2."
},
{
"text": "Note that the occurrence probabilities of all English terms in the bilingual dictionary can be pre-computed, and accessed at retrieval time using an efficient indexing scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The CLIR System",
"sec_num": "4.2."
},
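Equation (9) translates directly into code. In this sketch, trans_prob stands in for the probabilistic dictionary P(w|f) and occ_prob for the confusion-network occurrence probabilities p(f|d); omitting occ_prob treats every document word as certain, recovering the text-document case of Equation (8). Names are illustrative.

```python
def clir_relevance(query_words, doc_words, trans_prob, occ_prob=None):
    """P(d is relevant to q) = prod_{w in q} [1 - prod_{f in d} (1 - P(f|d) P(w|f))].
    trans_prob: dict (w, f) -> P(w|f); occ_prob: dict f -> P(f|d), or None for text."""
    p_relevant = 1.0
    for w in query_words:
        # Probability that NO source word in the document translates to w.
        p_none = 1.0
        for f in doc_words:
            p_f = 1.0 if occ_prob is None else occ_prob.get(f, 0.0)
            p_none *= 1.0 - p_f * trans_prob.get((w, f), 0.0)
        p_relevant *= 1.0 - p_none
    return p_relevant
```

A query word with no translation anywhere in the document drives the whole product to zero, matching the requirement that every query term be covered.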
{
"text": "Parallel training data were used to estimate the probabilistic dictionaries. The data consist mostly of parallel sentences released under the IARPA MATERIAL and IARPA LORELEI (LOR, 2015) programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Training Data",
"sec_num": "4.3."
},
{
"text": "A parallel lexicon downloaded automatically from Panlex (https://panlex.org/) was also included. Training data are completely disjoint from the data mentioned in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Training Data",
"sec_num": "4.3."
},
{
"text": "The amount of transcribed speech available for acoustic model training varied for each language: 48 hours for Somali, 68 hours for Swahili and 128 hours for Tagalog. For language modeling, automatically collected web data (using the techniques of (Zhang et al., 2015) ) were also used. In addition to the MATERIAL data, Swahili and Tagalog also include training data from the IARPA Babel program (Bab, 2011).",
"cite_spans": [
{
"start": 247,
"end": 267,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR System Description",
"sec_num": "4.4."
},
{
"text": "tures associated with the term that constrain the sense or morphology. A document is relevant if at least one place in the foreign source could be translated to the term(s). In our experiments, the CLIR system simplifies the problem by requiring that each of the terms of the query is a possible translation of at least one foreign word in the document, ignoring any of the semantic or syntactic constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ASR System Description",
"sec_num": "4.4."
},
{
"text": "Our ASR systems are trained using the Sage speech processing platform (Hsiao et al., 2016) , which integrates multiple machine learning toolkits, and uses Kaldi (Povey et al., 2011) for acoustic model training. Our acoustic models are pre-trained on 1500 hours of data from 11 languages (Keith et al., 2018) and then fine-tuned to the target language. We use a CNN-LSTM acoustic model, which is similar to the TDNN-LSTM (Cheng et al., 2017) , but with eight additional convolutional layers prepended to the network. Table 2 : Word error rate (WER) performance on a tuning set (known as Analysis1 in the MATERIAL program). Baseline refers to our multilingual CNN-LSTM acoustic model. LM Expansion expands the LM and lexicon using the automatically collected web data. SST further improves the acoustic model with semi-supervised training.",
"cite_spans": [
{
"start": 70,
"end": 90,
"text": "(Hsiao et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 161,
"end": 181,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF10"
},
{
"start": 287,
"end": 307,
"text": "(Keith et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 420,
"end": 440,
"text": "(Cheng et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ASR System Description",
"sec_num": "4.4."
},
{
"text": "While word error rate (WER) is not the metric of interest, we show WER results in Table 2 to give a sense of the task difficulty. Our baseline results use our best acoustic model with the given training data, but the WER is still over 40% for each language. A major difficulty for ASR in the IARPA MATERIAL program is the mismatch between the training and test data. All training data is conversational telephone speech (CTS), while the test data is mostly broadcast data. Expanding the language model (LM) with the collected web data partially overcomes this mismatch and gives more than a 10 point absolute improvement in WER. We further reduce the mismatch through semi-supervised training using the evaluation data (approximately 70 hours).",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ASR System Description",
"sec_num": "4.4."
},
{
"text": "Note that this adaptation is unsupervised and is allowed by the MATERIAL program. During decoding we use standard trigram language models. We perform IR on CNets as it significantly improves performance beyond the one-best. Table 3 (a) contains AQWV results with the various normalization techniques described in the paper (the column \"original\" is without normalization), for the Train and Test retrieval corpora mentioned in Section 4.1. Some observations are in order:",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "ASR System Description",
"sec_num": "4.4."
},
{
"text": "1. Compared to the original system scores, almost all normalization methods give gains on the text genre of all datasets. On the Test condition, the average gain (from the supervised normalization) for the text genre is 258%, while the average gain for the audio genre is 96% relative. This shows that, for measures such as AQWV (that rely on hard decisions) score normalization is of crucial importance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AQWV/MQWV Results",
"sec_num": "4.5."
},
{
"text": "2. In all cases, the supervised, model-based approach, has the best performance on the Test condition among all methods considered. Compared to the best unsupervised method, the supervised approach is 23% bet- 3. QST is substantially better than STO in all cases. This is expected, given that QST is designed specifically for AQWV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AQWV/MQWV Results",
"sec_num": "4.5."
},
{
"text": "Note that, for the Tune and Test conditions, the results of Table 3 (a) were obtained with a decision threshold that was optimal on the Train condition. This, of course, can be suboptimal. For example, the AQWV of the original (un-normalized) system for the Somali-text Test condition is negative because the tuned acceptance threshold is too low, which makes the false alarm rate too high (a decision threshold that does not accept anything gives an AQWV of zero). So, to better understand the effect that score normalization has on the performance of a system and remove the error introduced by the imperfect decision threshold, we also computed an oracle AQWV value, the maximum AQWV (MQWV), obtained by sweeping over all possible decision thresholds in each one of the conditions presented, which we show in Table 3 (b). We see that all MQWV values are now non-negative, and, as expected, greater than the AQWV counterparts of Table 3 (a). The supervised method is still the best on average over all languages and conditions (it is worse than QST by 0.95% absolute on Somali Test but better than QST by 3% absolute on Swahili Test).",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 812,
"end": 819,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 931,
"end": 938,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "AQWV/MQWV Results",
"sec_num": "4.5."
},
{
"text": "In this paper, we looked at the problem of producing hard decisions in a CLIR system. One interesting application that we did not have the space to investigate in this paper is one where retrieval is done on-line, in a streaming fashion. Although there is no concept of a \"fixed\" collection in this case, one can consider a sliding window over the stream for the purpose of computing various features, such as the sum of posteriors of Sections 2 and 3. We plan to investigate this problem in a future publication, as well as techniques that integrate score normalization directly into a CLIR engine (e.g., training a neural network CLIR system with the objective of optimizing the ultimate measure of interest, instead of an approximate measure such as cross-entropy). Furthermore, with the right architecture, the neural network can learn the most appropriate features for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5."
},
{
"text": "This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Air Force Research Laboratory contract number FA8650-17-C-9118. Useful discussions with other members of the Analytics and Machine Intelligence Department at Raytheon BBN Technologies are gratefully acknowledged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6."
},
{
"text": "We are using document-level granularity in this paper, although similar techniques can be used for different granularities as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For this program, each query consists of one or two English terms, each a word or short phrase. In some cases, there are fea-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "IARPA Babel program -broad agency announcement (baa)",
"authors": [],
"year": 2011,
"venue": "Bibliographical References",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bibliographical References (2011). IARPA Babel program -broad agency announce- ment (baa). https://www.iarpa.gov/index.php/research- programs/babel.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An exploration of dropout with LSTMs",
"authors": [
{
"first": "G",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng, G., Peddinti, V., Povey, D., Manohar, V., Khudan- pur, S., and Yan, Y. (2017). An exploration of dropout with LSTMs. In Proc. Interspeech.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Results of the 2006 spoken term detection evaluation",
"authors": [
{
"first": "J",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ajot",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Garofolo",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fiscus, J. G., Ajot, J., and Garofolo, J. S. (2007). Results of the 2006 spoken term detection evaluation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sage: The new bbn speech processing platform",
"authors": [
{
"first": "R",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Meermeier",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silovsk\u00fd",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Keith",
"suffix": ""
}
],
"year": 2016,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "3022--3026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsiao, R., Meermeier, R., Ng, T., Huang, Z., Jordan, M., Kan, E., Alum\u00e4e, T., Silovsk\u00fd, J., Hartmann, W., Keith, F., et al. (2016). Sage: The new bbn speech processing platform. In Interspeech, pages 3022-3026.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Score normalization and system combination for improved keyword spotting",
"authors": [
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ranjan",
"suffix": ""
},
{
"first": "T",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Saikumar",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bulyko",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2013,
"venue": "Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "210--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karakos, D., Schwartz, R., Tsakalidis, S., Zhang, L., Ran- jan, S., Ng, T. T., Hsiao, R., Saikumar, G., Bulyko, I., Nguyen, L., et al. (2013). Score normalization and sys- tem combination for improved keyword spotting. In Au- tomatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 210-215. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Optimizing multilingual knowledge transfer for time-delay neural networks with low-rank factorization",
"authors": [
{
"first": "F",
"middle": [],
"last": "Keith",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "M.-H",
"middle": [],
"last": "Siu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Kimball",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4924--4928",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith, F., Hartmann, W., Siu, M.-H., Ma, J., and Kim- ball, O. (2018). Optimizing multilingual knowledge transfer for time-delay neural networks with low-rank factorization. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4924-4928. IEEE. (2015). DARPA LORELEI Program -broad agency an- nouncement (baa). https://www.darpa.mil/program/low- resource-languages-for-emergent-incidents.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A novel discriminative score calibration method for keyword search",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "W.-Q",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lv, Z., Cai, M., Zhang, W.-Q., and Liu, J. (2016). A novel discriminative score calibration method for key- word search. In Interspeech.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Score combination and score normalization for spoken term detection",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mamou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "M",
"middle": [
"J F"
],
"last": "Gales",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knill",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Nolden",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Picheny",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ramabhadran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sethy",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
}
],
"year": 2013,
"venue": "Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "8272--8276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mamou, J., Cui, J., Cui, X., Gales, M. J. F., Kingsbury, B., Knill, K., Mangu, L., Nolden, D., Pickeny, M., Ramab- hadran, B., Schl\u00fcter, R., Sethy, A., and Woodland, P. C. (2013). Score combination and score normalization for spoken term detection. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Confer- ence on, pages 8272-8276. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. and Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discriminative score normalization for keyword search decision",
"authors": [
{
"first": "V",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "N",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sivadas",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Lim",
"suffix": ""
},
{
"first": "E",
"middle": [
"S"
],
"last": "Chng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pham, V. T., Xu, H., Chen, N. F., Sivadas, S., Lim, B. P., Chng, E. S., and H., L. (2014). Discriminative score normalization for keyword search decision. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE In- ternational Conference on.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 workshop on automatic speech recognition and understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem- bek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., et al. (2011). The kaldi speech recog- nition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, number EPFL- CONF-192584. IEEE Signal Processing Society.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Numerical Recipes: The Art of Scientific Computing",
"authors": [
{
"first": "W",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (2007). Numerical Recipes: The Art of Scientific Computing. Cambridge University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Unsupervised system combination for set-based retrieval with expectation maximization",
"authors": [
{
"first": "H.-C",
"middle": [],
"last": "Shing",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Barrow",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Galu\u0161\u010d\u00e1kov\u00e1",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2019,
"venue": "CLEF-2019: Experimental IR Meets Multilinguality, Multimodality, and Interaction",
"volume": "",
"issue": "",
"pages": "191--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shing, H.-C., Barrow, J., Galu\u0161\u010d\u00e1kov\u00e1, P., Oard, D. W., and Resnik, P. (2019). Unsupervised system combination for set-based retrieval with expectation maximization. In Fabio Crestani, et al., editors, CLEF-2019: Experimen- tal IR Meets Multilinguality, Multimodality, and Interac- tion, pages 191-197, Cham. Springer International Pub- lishing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A comparison of multiple methods for rescoring keyword search lists for low resource languages",
"authors": [
{
"first": "V",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soto, V., Mangu, L., Rosenberg, A., and Hirschberg, J. (2014). A comparison of multiple methods for rescor- ing keyword search lists for low resource languages. In Interspeech.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Spoken term detection AL-BAYZIN 2014 evaluation: overview, systems, results, and discussion",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tejedor",
"suffix": ""
},
{
"first": "D",
"middle": [
"T"
],
"last": "Toledano",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Lopez-Otero",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Docio-Fernandez",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Garcia-Mateo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cardenal",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Echeverry-Correa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Coucheiro-Limeres",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Olcoz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Miguel",
"suffix": ""
}
],
"year": 2015,
"venue": "EURASIP Journal on Audio, Speech, and Music Processing",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejedor, J., Toledano, D. T., Lopez-Otero, P., Docio- Fernandez, L., Garcia-Mateo, C., Cardenal, A., Echeverry-Correa, J. D., Coucheiro-Limeres, A., Olcoz, J., and Miguel, A. (2015). Spoken term detection AL- BAYZIN 2014 evaluation: overview, systems, results, and discussion. EURASIP Journal on Audio, Speech, and Music Processing, 2015(1).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An in-depth comparison of keyword specific thresholding and sum-to-one score normalization",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Y. and Metze, F. (2014). An in-depth comparison of keyword specific thresholding and sum-to-one score normalization. In Interspeech.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Termdependent confidence for out-of-vocabulary term detection",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Frankel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bell",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, D., King, S., Frankel, J., and Bell, P. (2009). Term- dependent confidence for out-of-vocabulary term detec- tion. In Interspeech.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The tao of atwv: Probing the mysteries of keyword search performance",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wegmann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Faria",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Janin",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Riedhammer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "192--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wegmann, S., Faria, A., Janin, A., Riedhammer, K., and Morgan, N. (2013). The tao of atwv: Probing the mys- teries of keyword search performance. In 2013 IEEE Workshop on Automatic Speech Recognition and Under- standing, pages 192-197. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Data Fusion in Information Retrieval",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, S. (2012). Data Fusion in Information Retrieval. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural-network lexical translation for cross-lingual IR from text and speech",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zbib",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Deyoung",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Rivkin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019",
"volume": "",
"issue": "",
"pages": "645--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zbib, R., Zhao, L., Karakos, D., Hartmann, W., DeYoung, J., Huang, Z., Jiang, Z., Rivkin, N., Zhang, L., Schwartz, R. M., and Makhoul, J. (2019). Neural-network lexical translation for cross-lingual IR from text and speech. In Benjamin Piwowarski, et al., editors, Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 645-654. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "White listing and score normalization for keyword spotting of noisy speech",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Matsoukas",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, B., Schwartz, R., Tsakalidis, S., Nguyen, L., and Matsoukas, S. (2012). White listing and score normal- ization for keyword spotting of noisy speech. In Inter- speech.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enhancing low resource keyword spotting with automatically retrieved web documents",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
}
],
"year": 2015,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "839--843",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, L., Karakos, D., Hartmann, W., Hsiao, R., Schwartz, R., and Tsakalidis, S. (2015). Enhancing low resource keyword spotting with automatically retrieved web documents. In Interspeech, pages 839-843.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Comparison of the DET curves without/with score normalization. The gray lines are contours of equal AQWV.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "P(d is relevant to q) = P(each term w of q occurs at least once in T(d)) = \u220f_{w \u2208 q} P(w occurs at least once in T(d)) = \u220f_{w \u2208 q} [1 \u2212 P(w does not occur in T(d))]",
"num": null
},
"TABREF2": {
"text": "(a) AQWV results on two genres of three languages (rows) and three conditions. The best result per dataset is shown in bold. (b) Corresponding MQWV results using the oracle decision threshold per condition.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}