{
"paper_id": "N12-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:37.108846Z"
},
"title": "Implicitly Intersecting Weighted Automata using Dual Decomposition *",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose an algorithm to find the best path through an intersection of arbitrarily many weighted automata, without actually performing the intersection. The algorithm is based on dual decomposition: the automata attempt to agree on a string by communicating about features of the string. We demonstrate the algorithm on the Steiner consensus string problem, both on synthetic data and on consensus decoding for speech recognition. This involves implicitly intersecting up to 100 automata. * The authors are grateful to Damianos Karakos for providing tools and data for the ASR experiments.",
"pdf_parse": {
"paper_id": "N12-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose an algorithm to find the best path through an intersection of arbitrarily many weighted automata, without actually performing the intersection. The algorithm is based on dual decomposition: the automata attempt to agree on a string by communicating about features of the string. We demonstrate the algorithm on the Steiner consensus string problem, both on synthetic data and on consensus decoding for speech recognition. This involves implicitly intersecting up to 100 automata. * The authors are grateful to Damianos Karakos for providing tools and data for the ASR experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many tasks in natural language processing involve functions that assign scores-such as logprobabilities-to candidate strings or sequences. Often such a function can be represented compactly as a weighted finite state automaton (WFSA). Finding the best-scoring string according to a WFSA is straightforward using standard best-path algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is common to construct a scoring WFSA by combining two or more simpler WFSAs, taking advantage of the closure properties of WFSAs. For example, consider noisy channel approaches to speech recognition (Pereira and Riley, 1997) or machine translation (Knight and Al-Onaizan, 1998) . Given an input f , the score of a possible English transcription or translation e is the sum of its language model score log p(e) and its channel model score log p(f | e). If each of these functions of e is represented as a WFSA, then their sum is represented as the intersection of those two WFSAs.",
"cite_spans": [
{
"start": 203,
"end": 228,
"text": "(Pereira and Riley, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 252,
"end": 281,
"text": "(Knight and Al-Onaizan, 1998)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WFSA intersection corresponds to constraint conjunction, and hence is often a mathematically natural way to specify a solution to a problem involving multiple soft constraints on a desired string. Unfortunately, the intersection may be computationally inefficient in practice. The intersection of K WFSAs having n 1 , n 2 , . . . , n K states may have n 1 \u2022n 2 \u2022 \u2022 \u2022 n K states in the worst case. 1 In this paper, we propose a more efficient method for finding the best path in an intersection without actually computing the full intersection. Our approach is based on dual decomposition, a combinatorial optimization technique that was recently introduced to the vision (Komodakis et al., 2007) and language processing communities Koo et al., 2010) . Our idea is to interrogate the several WFSAs separately, repeatedly visiting each WFSA to seek a high-scoring path in each WFSA that agrees with the current paths found in the other WSFAs. This iterative negotiation is reminiscent of message-passing algorithms (Sontag et al., 2008) , while the queries to the WFSAs are reminiscent of loss-augmented inference (Taskar et al., 2005) .",
"cite_spans": [
{
"start": 397,
"end": 398,
"text": "1",
"ref_id": null
},
{
"start": 671,
"end": 695,
"text": "(Komodakis et al., 2007)",
"ref_id": "BIBREF12"
},
{
"start": 732,
"end": 749,
"text": "Koo et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 1013,
"end": 1034,
"text": "(Sontag et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 1112,
"end": 1133,
"text": "(Taskar et al., 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We remark that a general solution whose asymptotic worst-case runtime beat that of naive intersection would have important implications for complexity theory (Karakostas et al., 2003) . Our approach is not such a solution. We have no worst-case bounds on how long dual decomposition will take to converge in our setting, and indeed it can fail to converge altogether. 2 However, when it does converge, we have a \"certificate\" that the solution is optimal.",
"cite_spans": [
{
"start": 158,
"end": 183,
"text": "(Karakostas et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dual decomposition is usually regarded as a method for finding an optimal vector in R d , subject to several constraints. However, it is not obvious how best to represent strings as vectors-they 1 Most regular expression operators combine WFSA sizes additively. It is primarily intersection and its close relative, composition, that do so multiplicatively, leading to inefficiency when two large WFSAs are combined, and to exponential blowup when many WFSAs are combined. Yet these operations are crucially important in practice.",
"cite_spans": [
{
"start": 195,
"end": 196,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 An example that oscillates can be constructed along lines similar to the one given by . have unbounded length, and furthermore the absolute position of a symbol is not usually significant in evaluating its contribution to the score. 3 One contribution of this work is that we propose a general, flexible scheme for converting strings to feature vectors on which the WFSAs must agree. In principle the number of features may be infinite, but the set of \"active\" features is expanded only as needed until the algorithm converges. Our experiments use a particular instantiation of our general scheme, based on n-gram features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We apply our method to a particular task: finding the Steiner consensus string (Gusfield, 1997 ) that has low total edit distance to a number of given, unaligned strings. As an illustration, we are pleased to report that \"alia\" and \"aian\" are the consensus popular names for girls and boys born in the U.S. in 2010. We use this technique for consensus decoding from speech recognition lattices, and to reconstruct the common source of up to 100 strings corrupted by random noise. Explicit intersection would be astronomically expensive in these cases. We demonstrate that our approach tends to converge rather quickly, and that it finds good solutions quickly in any case.",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(Gusfield, 1997",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A weighted finite state automaton (WFSA) over the finite alphabet \u03a3 is an FSA that has a cost or weight associated with each arc. We consider the case of real-valued weights in the tropical semiring. This is a fancy way of saying that the weight of a path is the sum of its arc weights, and that the weight of a string is the minimum weight of all its accepting paths (or \u221e if there are none).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Finite State Automata",
"sec_num": "2.1"
},
{
"text": "When we intersect two WFSAs F and G, the effect is to add string weights:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Finite State Automata",
"sec_num": "2.1"
},
{
"text": "(F \u2229 G)(x) = F (x) + G(x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Finite State Automata",
"sec_num": "2.1"
},
{
"text": "Our problem is to find the x that minimizes this sum, but without constructing F \u2229 G to run a shortest-path algorithm on it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Finite State Automata",
"sec_num": "2.1"
},
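{
"text": "As a minimal illustration of why explicit intersection is expensive, the following Python sketch (purely illustrative; not the implementation used later in our experiments) performs the textbook product construction on two tropical-semiring WFSAs represented as simple arc lists; the number of states of the result is the product of the input state counts.

# Illustrative sketch: tropical-semiring WFSA intersection by product construction.
# A WFSA is (start, finals, arcs), with arcs given as (src, label, weight, dst).
def intersect(fsa1, fsa2):
    start1, finals1, arcs1 = fsa1
    start2, finals2, arcs2 = fsa2
    arcs = []
    for (p, a, w1, q) in arcs1:
        for (r, b, w2, s) in arcs2:
            if a == b:
                # Weights add along a path in the tropical semiring, so the
                # combined arc carries the sum of the two arc weights.
                arcs.append(((p, r), a, w1 + w2, (q, s)))
    finals = {(f1, f2) for f1 in finals1 for f2 in finals2}
    # States of the result are pairs of states: |Q1| * |Q2| in the worst case.
    return ((start1, start2), finals, arcs)

# Toy usage: both machines accept 'ab'; the intersection scores it 1.0+0.5+0.2+0.3 = 2.0.
F = (0, {2}, [(0, 'a', 1.0, 1), (1, 'b', 0.5, 2)])
G = (0, {2}, [(0, 'a', 0.2, 1), (1, 'b', 0.3, 2)])
print(intersect(F, G))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Finite State Automata",
"sec_num": "2.1"
},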
{
"text": "The trick in dual decomposition is to decompose an intractable global problem into two or more tractable subproblems that can be solved independently. If we can somehow combine the solutions from the subproblems into a \"valid\" solution to the global problem, then we can avoid optimizing the joint problem directly. A valid solution is one in which the individual solutions of each subproblem all agree on the variables which are shared in the joint problem. For example, if we are combining a parser with a part-of-speech tagger, the tag assignments from both models must agree in the final solution ; if we are intersecting a translation model with a language model, then it is the words that must agree (Rush and Collins, 2011) .",
"cite_spans": [
{
"start": 706,
"end": 730,
"text": "(Rush and Collins, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "More formally, suppose we want to find a global solution that is jointly optimized among K subproblems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "argmin x K k=1 f k (x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "Suppose that x ranges over vectors. Introducing an auxiliary variable x k for each subproblem f k allows us to equivalently formulate this as the following constrained optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "argmin {x,x 1 ,...,x K } K k=1 f k (x k ) s.t. (\u2200k) x k = x (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "For any set of vectors \u03bb k that sum to 0, K k=1 \u03bb k = 0, Komodakis et al. (2007) show that the following Lagrangian dual is a lower bound on (1):",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "Komodakis et al. (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "4 min {x 1 ,...,x K } K k=1 f k (x k ) + \u03bb k \u2022 x k (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "where the Lagrange multiplier vectors \u03bb k can be used to penalize solutions that do not satisfy the agreement constraints (\u2200k) x k = x. Our goal is to maximize this lower bound and hope that the result does satisfy the constraints. The graphs in Fig. 2 illustrate how we increase the lower bound over time, using a subgradient algorithm to adjust the \u03bb's. At each subgradient step, (2) can be computed by choosing each",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 252,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "x k = argmin x k f k (x k ) + \u03bb k \u2022 x k sep- arately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "In effect, each subproblem makes an independent prediction x k influenced by \u03bb k , and if these outputs do not yet satisfy the agreement constraints, then the \u03bb k are adjusted to encourage the subproblems to agree on the next iteration. See Sontag et al. (2011) for a detailed tutorial on dual decomposition.",
"cite_spans": [
{
"start": 241,
"end": 261,
"text": "Sontag et al. (2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
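{
"text": "The following toy Python sketch (illustrative only; the candidate sets and cost functions are made up) shows this projected subgradient scheme when each subproblem is solved by brute force over a small set of candidate vectors. The multipliers are nudged away from each subproblem's current solution and toward the mean solution, and agreement yields a certificate of optimality.

# Illustrative toy sketch of dual decomposition by projected subgradient ascent.
# Each subproblem k picks a vector from its candidate set to minimize
# f_k(x) + lambda_k . x; the lambda_k (which always sum to zero) push the picks to agree.
def dual_decomposition(candidate_sets, costs, iters=200, step=0.5):
    K = len(candidate_sets)
    dim = len(candidate_sets[0][0])
    lambdas = [[0.0] * dim for _ in range(K)]
    picks = []
    for t in range(iters):
        picks = []
        for k in range(K):
            # Solve the k-th relaxed subproblem by brute force over its candidates.
            picks.append(min(candidate_sets[k],
                             key=lambda x: costs[k](x) + sum(l * xi for l, xi in zip(lambdas[k], x))))
        if all(x == picks[0] for x in picks):
            return picks[0]                    # agreement: certificate of optimality
        mean = [sum(x[d] for x in picks) / K for d in range(dim)]
        eta = step / (t + 1.0)
        for k in range(K):
            for d in range(dim):
                # Move lambda_k away from this subproblem's pick and toward the mean;
                # these updates sum to zero over k, so lambda stays in the feasible set.
                lambdas[k][d] += eta * (picks[k][d] - mean[d])
    return picks[0]                            # no certificate of optimality

# Toy usage: two subproblems over two-dimensional 0/1 vectors with different costs.
cands = [[(0, 0), (0, 1), (1, 0), (1, 1)]] * 2
costs = [lambda x: 2 * x[0] - x[1], lambda x: -x[0] + 0.5 * x[1]]
print(dual_decomposition(cands, costs))        # converges to (0, 1), the joint optimum
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},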
{
"text": "Given K WFSAs, F 1 , . . . , F K , we are interested in finding the string x which has the best score in the intersection F 1 \u2229 . . . \u2229 F K . The lowest-cost string in the intersection of all K machines is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmin x k F k (x)",
"eq_num": "(3)"
}
],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "As explained above, the trick in dual decomposition is to recast (3) as independent problems of the form argmin x k F k (x k ), subject to constraints that all x k are the same. However, it is not so clear how to define agreement constraints on strings. Perhaps a natural formulation is that F k should be urged to favor strings x k that would be read by F k along a similar path to that of x k . But F k cannot keep track of the state of F k for all k without solving the full intersection-precisely what we are trying to avoid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "Instead of requiring the strings x k to be equal as in (1), we will require their features to be equal:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(\u2200k) \u03b3(x k ) = \u03b3(x)",
"eq_num": "(4)"
}
],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "Of course, we must define the features. We will use an infinite feature vector \u03b3(x) that completely characterizes x, so that agreement of the feature vectors implies agreement of the strings. At each subgradient step, however, we will only allow finitely many elements of \u03bb k to become nonzero, so only a finite portion of \u03b3(x k ) needs to be computed. 5 We will define these \"active\" features of a string x by constructing some unweighted deterministic FSA, G (described in \u00a74). The active features of x are determined by the collection of arcs on the accepting path of x in G. Thus, to satisfy the agreement constraint, x i and x j must be accepted using the same arcs of G (or more generally, arcs that have the same features).",
"cite_spans": [
{
"start": 353,
"end": 354,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
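{
"text": "For concreteness, here is a small illustrative sketch (hypothetical code, not our actual implementation) of the kind of feature map used in our experiments: \u03b3_\u0398(x) counts only the currently active n-grams in \u0398, so two strings \"agree\" exactly when these counts coincide.

# Illustrative sketch: restrict the (infinite) n-gram feature vector to the active
# set Theta, returning a dict of occurrence counts for just those features.
from collections import Counter

def gamma(x, theta):
    # x is a sequence of symbols; theta is a set of n-gram tuples (of mixed lengths).
    counts = Counter()
    for g in theta:
        n = len(g)
        counts[g] = sum(1 for i in range(len(x) - n + 1) if tuple(x[i:i + n]) == g)
    return counts

theta1 = {('a',), ('b',), ('c',)}                      # unigrams only
print(gamma('abc', theta1) == gamma('cba', theta1))    # True: same unigram counts, different strings
theta2 = theta1 | {('a', 'b'), ('b', 'c')}             # add two bigrams
print(gamma('abc', theta2) == gamma('cba', theta2))    # False: the bigram counts now disagree
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},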
{
"text": "We relax the constraints by introducing a collection \u03bb = \u03bb 1 , . . . , \u03bb K of Lagrange multipliers, 5 The simplest scheme would define a binary feature for each string in \u03a3 * . Then the nonzero elements of \u03bb k would specify punishments and rewards for outputting various strings that had been encountered at earlier iterations: \"Try subproblem k again, and try harder not to output michael this time, as it still didn't agree with other subproblems: try jason instead.\" This scheme would converge glacially if at all. We instead focus on featurizations that let subproblems negotiate about substrings: \"Try again, avoiding mi if possible and favoring ja instead.\" and defining G \u03bb k (x) such that the features of G are weighted by the vector \u03bb k (all of whose nonzero elements must correspond to features in G). As in (2), we assume \u03bb \u2208 \u039b, where \u039b = {\u03bb : k \u03bb k = 0}. This gives the objective:",
"cite_spans": [
{
"start": 100,
"end": 101,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "h(\u03bb) = min {x 1 ,...,x K } k (F k (x k ) + G \u03bb k (x k )) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "This minimization fully decomposes into K subproblems that can be solved independently. The kth subproblem is to find argmin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "x k F k (x k ) + G \u03bb k (x k ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "which is straightforward to solve with finite-state methods. It is the string on the lowest-cost path through H k = F k \u2229 G \u03bb k , as found with standard path algorithms (Mohri, 2002) .",
"cite_spans": [
{
"start": 169,
"end": 182,
"text": "(Mohri, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "The dual problem we wish to solve is max \u03bb\u2208\u039b h(\u03bb), where h(\u03bb) itself is a min over {x 1 , . . . , x K }. We optimize \u03bb via projected subgradient ascent (Komodakis et al., 2007) . The update equation for \u03bb k at iteration t is then:",
"cite_spans": [
{
"start": 152,
"end": 176,
"text": "(Komodakis et al., 2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "\u03bb (t+1) k = \u03bb (t) k + \u03b7 t \u03b3(x (t) k ) \u2212 k \u03b3(x (t) k ) K (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
{
"text": "where \u03b7 t > 0 is the step size at iteration t. This update is intuitive. It moves away from the current solution and toward the average solution (where they differ), by increasing the cost of the former's features and reducing the cost of the latter's features. This update may be very dense, however, since \u03b3(x) is an infinite vector. So we usually only update the elements of \u03bb k that correspond to the small finite set of active features (the other elements are still \"frozen\" at 0), denoted \u0398. This is still a valid subgradient step. This strategy is incorrect only if the updates for all active features are 0-in other words, only if we have achieved equality of the currently active features and yet still the {x k } do not agree. In that case, we must choose some inactive features that are still unequal and allow the subgradient step to update their \u03bb coefficients to nonzero, making them active. At the next step of optimization, we must expand G to consider this enlarged set of active features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},
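{
"text": "An illustrative sketch of the update in equation (6) restricted to the active features (hypothetical code; lambdas and gammas are dictionaries keyed by n-gram, such as those returned by the gamma sketch in \u00a73):

# Illustrative sketch of the subgradient update (6), applied only to the active features.
def update_lambdas(lambdas, gammas, eta):
    # lambdas: list of K dicts (n-gram -> weight); gammas: list of K dicts of active-feature
    # counts for the current solutions x_k (e.g. as returned by gamma() above).
    K = len(gammas)
    features = set().union(*(g.keys() for g in gammas))
    for f in features:
        mean = sum(g.get(f, 0) for g in gammas) / K
        for k in range(K):
            # Penalize features this subproblem overuses relative to the average and
            # reward features it underuses; the updates over k sum to zero.
            lambdas[k][f] = lambdas[k].get(f, 0.0) + eta * (gammas[k].get(f, 0) - mean)
    return lambdas

lams = update_lambdas([{}, {}], [{('a', 'b'): 2}, {('a', 'b'): 0}], eta=0.5)
print(lams)   # the first subproblem's cost for ab rises by 0.5, the second's falls by 0.5
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WFSAs and Dual Decomposition",
"sec_num": "3"
},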
{
"text": "The agreement machine (or constraint machine) G can be thought of as a way of encoding features of strings on which we enforce agreement. There are a number of different topologies for G that might be considered, with varying degrees of efficiency and utility. Constructing G essentially amounts to feature engineering; as such, it is unlikely that there is a universally optimal topology of G. Nevertheless, there are clearly bad ways to build G, as not all topologies are guaranteed to lead to an optimal solution. In this section, we lay out some abstract guidelines for appropriate G construction, before we describe specific topologies in the later subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "Most importantly, we should design G so that it accepts all strings in F 1 \u2229 . . . \u2229 F K . This is to ensure that it accepts the string that is the optimal solution to the joint problem. If G did not accept that string, then neither would H k = F k \u2229 G, and our algorithm would not be able to find it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "Even if H k can accept the optimal string, it is possible that this string would never be the best path in this machine, regardless of \u03bb. For example, suppose G is a single-state machine with self-loops accepting each symbol in the alphabet (i.e. a unigram machine). Suppose H k outputs the string aaa in the current iteration, but we would like the machines to converge to aaaaa. We would lower the weight of \u03bb a to encourage H k to output more of the symbol a. However, if H k has a cyclic topology, then it could happen that a negative value of \u03bb a could create a negative-weight cycle, in which the lowest-cost path through H k is infinitely long. It might be that adjusting \u03bb a can change the best string to either aaa or aaaaaaaaa. . . (depending on whether a cycle after the initial aaa has positive or negative weight), but never the optimal aaaaa. On the other hand, if G instead encoded 5-grams, this would not be a problem because a path through a 5-gram machine could accept aaaaa without traversing a cycle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "Finally, agreeing on (active) features does not necessarily mean that all x k are the same string. For example, if we again use a unigram G (that is, \u0398 = \u03a3, the set of unigrams), then \u03b3 \u0398 (abc) = \u03b3 \u0398 (cba), where \u03b3 \u0398 returns a feature vector where all but the active features are zeroed out. In this instance, we satisfy the constraints imposed by G, even though we have not satisfied the constraint we truly care about: that the strings agree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "To summarize, we will aim to choose \u0398 such that G has the following characteristics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "1. The language L(F k \u2229 G) = L(F k ); i.e. G does not restrict the set of strings accepted by F k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "2. When \u03b3 \u0398 (x i ) = \u03b3 \u0398 (x j ), typically x i = x j . 3. \u2203\u03bb \u2208 \u039b s.t. argmin x F k (x) + G \u03bb k (x) = argmin x k F k (x), i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "e., the optimal string can be the best path in F k \u2229 G. 6 This may not be the case if G is cyclic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "The first of these is required during every iteration of the algorithm in order to maintain optimality guarantees. However, even if we do not satisfy the latter two points, we may get lucky and the strings themselves will agree upon convergence, and no further work is required. Furthermore, the unigram machine G used in the above examples, despite breaking these requirements, has the advantage of being very efficient to intersect with F . This motivates our \"active feature\" strategy of using a simple G initially, and incrementally altering it as needed, for example if we satisfy the constraints but the strings do not yet match. We discuss this in \u00a74.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Agreement Machine",
"sec_num": "4"
},
{
"text": "In principle, it is valid to use any G that satisfies the guidelines above, but in practice, some topologies will lead to faster convergence than others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "Perhaps the most obvious form is a simple vector encoding of strings, e.g. \"a at position 1\", \"b at position 2\", and so on. As a WFSA, this would simply have one state represent each position, with arcs for each symbol going from position i to i + 1. This is essentially a unigram machine where the loops have been \"unrolled\" to also keep track of position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "However, early experiments showed that with this topology for G, our algorithm converged very slowly, if at all. What goes wrong? The problem stems from the fact that the strings are unaligned and of varying length, and it is difficult to get the strings to agree quickly at specific positions. For example, if two subproblems have b at positions 6 and 8 in the current iteration, they might agree at position 7-but our features don't encourage this. The Lagrangian update would discourage accepting b at 6 and encourage b at 8 (and vice versa), without giving credit for meeting in the middle. Further, these features do not encourage the subproblems to preserve the relative order of neighboring symbols, and strings which are almost the same but slightly misaligned will be penalized essentially everywhere. This is an ineffective way for the subproblems to communicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "In this paper, we focus on the feature set we found to work the best in our experiments: the strings should agree on their n-gram features, such as \"number of occurrences of the bigram ab.\" Even if we don't yet know precisely where ab should appear in the string, we can still move toward convergence if we try to force the subproblems to agree on whether and how often ab appears at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "To encode n-gram features in a WFSA, each state represents the (n\u22121)-gram history, and all arcs leaving the state represent the final symbol in the ngram, weighted by the score of that n-gram. The machine will also contain start and end states, with appropriate transitions to/from the n-gram states. For example, if the trigram abc has weight \u03bb abc , then the trigram machine will encode this as an arc with the symbol c leaving the state representing ab, and this arc will have weight \u03bb abc . If our feature set also contains 1-and 2-grams, then the arc in this example would incorporate the weights of all of the corresponding features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "\u03bb abc + \u03bb bc + \u03bb c .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
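{
"text": "Because each arc simply sums the weights of all matching n-gram features, the cost that G_{\u03bb_k} charges a string equals the dot product \u03bb_k \u2022 \u03b3(x) and can be computed directly from the feature weights; an illustrative sketch (hypothetical code):

# Illustrative sketch: the cost G_lambda charges a string is, at each position, the summed
# weights of all active n-grams ending there (e.g. lambda_abc + lambda_bc + lambda_c),
# which equals the dot product of lambda with the n-gram count vector gamma(x).
def G_cost(x, lam):
    total = 0.0
    for i in range(len(x)):
        for n in range(1, i + 2):
            total += lam.get(tuple(x[i - n + 1:i + 1]), 0.0)
    return total

lam = {('a', 'b', 'c'): -1.0, ('b', 'c'): 0.5, ('c',): 0.25}
print(G_cost('abc', lam))   # -0.25: the final arc carries lambda_abc + lambda_bc + lambda_c
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},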
{
"text": "A drawback is that these features give no information about where in the string the n-grams should occur. In a long string, we might want to encourage or discourage an n-gram in a certain \"region\" of the string. Our features can only encourage or discourage it everywhere in the string, which may lead to slow convergence. Nevertheless, in our particular experimental settings, we find that this works better than other topologies we have considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "Sparse N-Gram Encoding A full n-gram language model requires \u2248 |\u03a3| n arcs to encode as a WFSA. This could be quite expensive. Fortunately, large n-gram models can be compacted by using failure arcs (\u03c6-arcs) to encode backoff (Allauzen et al., 2003) . These arcs act as -transitions that can be taken only when no other transition is available. They allow us to encode the sparse subset of ngrams that have nonzero Lagrangians. We encode G such that all features whose \u03bb value is 0 will back off to the next largest n-gram having nonzero weight. This form of G still accepts \u03a3 * and has the same weights as a dense representation, but could require substantially fewer states.",
"cite_spans": [
{
"start": 225,
"end": 248,
"text": "(Allauzen et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-Gram Construction of G",
"sec_num": "4.1"
},
{
"text": "As mentioned above, we may need to alter G as we go along. Intuitively, we may want to start with features that are cheap to encode, to move the parameters \u03bb to a good part of the solution space, then incrementally bring in more expensive features as needed. Shorter n-grams require a smaller G and will require a shorter runtime per iteration, but if they are too short to be informative, then they may require many more iterations to reach convergence. In an extreme case, we may reach a point where the subproblems all agree on n-grams currently in \u0398, but the actual strings still do not match. Waiting until we hit such a point may be unnecessarily slow. We experimented with periodically increasing n (e.g. adding trigrams to the feature set if we haven't converged with bigrams after a fixed number of iterations), but this is expensive, and it is not clear how to define a schedule for increasing the order of n. We instead present a simple and effective heuristic for bringing in more features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},
{
"text": "The idea is that if the subproblem solutions currently disagree on counts of the bigrams ab and bc, then an abc feature may be unnecessary, since the subproblems could still make progress with only these bigram constraints. However, once the subproblems agree on these two bigrams, but disagree on trigram abc, we bring this into the feature set \u0398. More generally, we add an (n + 1)-gram to the feature set if the current strings disagree on its counts despite agreeing on its n-gram prefix and n-gram suffix (which need not necessarily be \u0398). This selectively brings in larger n-grams to target portions of the strings that may require longer context, while keeping the agreement machine small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},
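{
"text": "An illustrative sketch of this expansion rule (hypothetical code; solutions is the list of current strings x_k and theta the active n-gram set):

# Illustrative sketch of the expansion rule: activate an n-gram z when the current
# solutions agree on the counts of its length-(|z|-1) prefix and suffix but not on z.
def count(s, g):
    n = len(g)
    return sum(1 for i in range(len(s) - n + 1) if tuple(s[i:i + n]) == g)

def expand_features(solutions, theta, max_n=5):
    def agree(g):
        return len({count(s, g) for s in solutions}) == 1
    new = set()
    for x in solutions:
        for n in range(2, max_n + 1):
            for i in range(len(x) - n + 1):
                z = tuple(x[i:i + n])
                if z not in theta and z not in new:
                    if agree(z[:-1]) and agree(z[1:]) and not agree(z):
                        new.add(z)
    return theta | new
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},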
{
"text": "Algorithm 1 gives pseudocode for our complete algorithm when using n-gram features with this incremental strategy. To summarize, we solve for each x k using the current \u03bb k , and if all the strings agree, we return them as the optimal solution. Otherwise, we update \u03bb k and repeat. At each iteration, we check for n-gram agreement, and bring in select (n + 1)grams to the feature set as appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},
{
"text": "Finally, there is another instance where we might Algorithm 1 The dual decomposition algorithm with n-gram features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},
{
"text": "Initialize \u0398 to some initial set of n-gram features. for t = 1 to T do for k = 1 to K do Solve x k = argmin x (F k \u2229 G \u03bb k )(x) with a shortest-path algorithm end for if (\u2200i, j)x i = x j then return {x 1 , . . . , x K } else \u0398 = \u0398 \u222a {z \u2208 \u03a3 * : all x k agree on the features corresponding to the length-(|z| \u2212 1) prefix and suffix of z, but not on z itself} for k = 1 to K do Update \u03bb k according to equation 6Create G \u03bb k to encode the features \u0398 end for end if end for need to expand G, which we omit from the pseudocode for conciseness. If both F k and G are cyclic, then there is a chance that there will be a negativeweight cycle in F k \u2229G \u03bb k . (If at least one of these machines is acyclic, then this is not a problem, because their intersection yields a finite set.) In the case of a negative-weight cycle, the best path is infinitely long, and so the algorithm will either return an error or fail to terminate. If this happens, then we need to backtrack, and either decrease the subgradient step size to avoid moving into this territory, or alter G to expand the cycles. This can be done by unrolling loops to keep track of more information-when encoding n-gram features with G, this amounts to expanding G to encode higher order n-grams. When using a sparse G with \u03c6-arcs, it may also be necessary to increase the minimum n-gram history that is used for back-off. For example, instead of allowing bigrams to back off to unigrams, we might force G to encode the full set of bigrams (not just bigrams with nonzero \u03bb) in order to avoid cycles in the lower order states. Our strategy for avoiding negative-weight cycles is detailed in \u00a75.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},
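{
"text": "Putting the pieces together, an illustrative end-to-end sketch of Algorithm 1 (hypothetical code; it composes the gamma, update_lambdas and expand_features sketches above, and assumes a user-supplied solver best_path(k, lambda_k) that returns the lowest-cost string of F_k \u2229 G_{\u03bb_k}, e.g. computed by building the intersection with an FST toolkit and running a shortest-path algorithm):

# Illustrative sketch of Algorithm 1, composing the gamma(), update_lambdas() and
# expand_features() sketches above. best_path(k, lam_k) is assumed to return the
# lowest-cost string of F_k intersected with G_{lam_k}.
def consensus(K, best_path, init_theta, T=1000, step=1.0):
    theta = set(init_theta)
    lambdas = [dict() for _ in range(K)]
    for t in range(T):
        xs = [best_path(k, lambdas[k]) for k in range(K)]
        if all(x == xs[0] for x in xs):
            return xs[0], True                 # all subproblems agree: provably optimal
        gammas = [gamma(x, theta) for x in xs]
        theta = expand_features(xs, theta)     # activate longer n-grams where needed
        eta = step / (t + 500.0)               # the step-size schedule of Section 5.1
        lambdas = update_lambdas(lambdas, gammas, eta)
    # No certificate; in practice one falls back to the best string seen so far
    # under the primal objective (as described in Section 5.1).
    return xs[0], False
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incrementally Expanding G",
"sec_num": "4.2"
},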
{
"text": "To best highlight the utility of our approach, we consider applications that must (implicitly) intersect a large number of WFSAs. We will demonstrate that, in many cases, our algorithm converges to an exact solution on problems involving 10, 25, and even 100 machines, all of which would be hopeless to solve by taking the full intersection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Consensus Decoding",
"sec_num": "5"
},
{
"text": "We focus on the problem of solving for the Steiner consensus string: given a set of K strings, find the string in \u03a3 * that has minimal total edit distance to all strings in the set. This is an NP-hard problem that can be solved as an intersection of K machines, as we will describe in \u00a75.2. The consensus string also gives an implicit multiple sequence alignment, as we discuss in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Consensus Decoding",
"sec_num": "5"
},
{
"text": "We begin with the application of minimum Bayes risk decoding of speech lattices, which we show can reduce to the consensus string problem. We then explore the consensus problem in depth by applying it to a variety of different inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments with Consensus Decoding",
"sec_num": "5"
},
{
"text": "We initialize \u0398 to include both unigrams and bigrams, as we find that unigrams alone are not productive features in these experiments. As we expand \u0398, we allow it to include n-grams up to length five.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5.1"
},
{
"text": "We run our algorithm for a maximum of 1000 iterations, using a subgradient step size of \u03b1/(t + 500) at iteration t, which satisfies the general properties to guarantee asymptotic convergence (Spall, 2003) . We initialize \u03b1 to 1 and 10 in the two subsections, respectively. We halve \u03b1 whenever we hit a negativeweight cycle and need to backtrack. If we still get negative-weight cycles after \u03b1 \u2264 10 \u22124 then we reset \u03b1 and increase the minimum order of n which is encoded in G. (If n is already at our maximum of five, then we simply end without converging.) In the case of non-convergence after 1000 iterations, we select the best string (according to the objective) from the set of strings that were solutions to any subproblem at any point during optimization.",
"cite_spans": [
{
"start": 191,
"end": 204,
"text": "(Spall, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5.1"
},
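{
"text": "An illustrative sketch of this step-size and backtracking policy (hypothetical code; alpha0 stands for the initial \u03b1 of the relevant subsection):

# Illustrative sketch of the step-size and backtracking policy described above.
def step_size(alpha, t):
    return alpha / (t + 500.0)

def on_negative_cycle(alpha, min_order, alpha0=1.0, max_order=5):
    # Halve the step size; once it is tiny, reset it and instead force G to encode
    # a higher minimum n-gram order (giving up if the maximum order is reached).
    if alpha > 1e-4:
        return alpha / 2.0, min_order
    return alpha0, min(min_order + 1, max_order)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5.1"
},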
{
"text": "Our implementation uses OpenFST 1.2.8 (Allauzen et al., 2007) .",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "(Allauzen et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "5.1"
},
{
"text": "We first consider the task of automatic speech recognition (ASR). Suppose x * is the true transcription (a string) of an spoken utterance, and \u03c0(w) is an ASR system's probability distribution over possible transcriptions w. The Bayes risk of an output transcription x is defined as the expectation w \u03c0(w) (x, w) for some loss function (Bickel and Doksum, 2006) . Minimum Bayes risk decoding (Goel and Byrne, 2003) involves choosing the x that minimizes the Bayes risk, rather than simply choosing the x that maximizes \u03c0(x) as in MAP decoding.",
"cite_spans": [
{
"start": 335,
"end": 360,
"text": "(Bickel and Doksum, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 391,
"end": 413,
"text": "(Goel and Byrne, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},
{
"text": "As a reasonable approximation, we will take the expectation over just the strings w 1 , . . . , w K that are most probable under \u03c0. A common loss function is the Levenshtein distance because this is generally used to measure the word error rate of ASR output. Thus, we seek a consensus transcription",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmin x K k=1 \u03c0 k d(x, w k )",
"eq_num": "(7)"
}
],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},
{
"text": "that minimizes a weighted sum of edit distances to all of the top-K strings, where high edit distance to more probable strings is more strongly penalized. Here d(x, w) is the unweighted Levenshtein distance between two strings, and \u03c0 k = \u03c0(w k ). If each \u03c0 k = 1/K, then argmin x is known as the Steiner consensus string, which is NP-hard to find (Sim and Park, 2003) . Equation 7is a weighted generalization of the Steiner problem. Given an input string w k , it is straightforward to define our WFSA F k such that F k (x) computes \u03c0 k d(x, w k ). A direct construction of F k is as follows. First, create a \"straight line\" WFSA whose single path accepts (only) w k ; each each state corresponds to a position in w k . These arcs all have cost 0. Now add various arcs with cost \u03c0 k that permit edit operations. For each arc labeled with a symbol a \u2208 \u03a3, add competing \"substitution\" arcs labeled with the other symbols in \u03a3, and a competing \"deletion\" arc labeled with ; these have the same source and target as the original arc. Also, at each state, add a self-loop labeled with each symbol in \u03a3; these are \"insertion\" arcs. Each arc that deviates from w k has a cost of \u03c0 k , and thus the lowest-cost path through F k accepting x has weight \u03c0 k d(x, w k ).",
"cite_spans": [
{
"start": 347,
"end": 367,
"text": "(Sim and Park, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},
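{
"text": "An illustrative sketch of this construction (hypothetical code; the automaton is returned as an arc list, with the empty string playing the role of \u03b5):

# Illustrative sketch of the edit-distance WFSA F_k for an input string w with weight pi.
# States are positions 0..len(w); arcs are (src, label, cost, dst); '' stands for epsilon.
def edit_wfsa(w, pi, sigma):
    arcs = []
    for i, a in enumerate(w):
        arcs.append((i, a, 0.0, i + 1))            # keep w[i]: cost 0
        arcs.append((i, '', pi, i + 1))            # delete w[i]
        for b in sigma:
            if b != a:
                arcs.append((i, b, pi, i + 1))     # substitute b for w[i]
    for i in range(len(w) + 1):
        for b in sigma:
            arcs.append((i, b, pi, i))             # insertion self-loops
    return {'start': 0, 'final': len(w), 'arcs': arcs}

# The lowest-cost accepting path for a string x then has weight pi * d(x, w).
F1 = edit_wfsa('ab', pi=0.2, sigma='ab')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},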
{
"text": "The consensus objective in Equation (7) can be solved by finding the lowest-cost path in F 1 \u2229 . . . \u2229 F K , and we can solve this best-path problem using the dual decomposition algorithm described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Minimum Bayes Risk Decoding for ASR",
"sec_num": "5.2"
},
{
"text": "We ran our algorithm on Broadcast News data, using 226 lattices produced by the IBM Attila decoder 0 <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> WE WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I DON'T WANT TO BE TAKING A DEEP BREATH NOW </s> <s> WELL I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> THEY WANT TO BE TAKING A DEEP BREATH NOW </s> 300 <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> WE WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I DON'T WANT TO BE TAKING A DEEP BREATH NOW </s> <s> WELL I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> WELL WANT TO BE TAKING A DEEP BREATH NOW </s> 375 <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I DON'T WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> 472 <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> <s> I WANT TO BE TAKING A DEEP BREATH NOW </s> ( Chen et al., 2006; Soltau et al., 2010) on a subset of the NIST dev04f data, using models trained by Zweig et al. (2011) . For each lattice, we found the consensus of the top K = 25 strings. 85% of the problems converged within 1000 iterations, with an average of 147.4 iterations. We found that the true consensus was often the most likely string under \u03c0, but not always-this was true 70% of the time. In the Bayes risk objective we are optimizing in equation 7-the expected loss-our approach averaged a score of 1.59, while always taking the top string gives only a slightly worse average of 1.66. 8% of the problems encountered negativeweight cycles, which were all resolved either by decreasing the step size or encoding larger n-grams.",
"cite_spans": [
{
"start": 1091,
"end": 1109,
"text": "Chen et al., 2006;",
"ref_id": "BIBREF4"
},
{
"start": 1110,
"end": 1130,
"text": "Soltau et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 1192,
"end": 1211,
"text": "Zweig et al. (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.2.1"
},
{
"text": "The above experiments demonstrate that we can exactly find the best path in the intersection of 25 machines-an intersection that could not feasibly be constructed in practice. However, these experiments do not exhaustively explore how dual decomposition behaves on the Steiner string problem in general. Above, we experimented with only a fixed number of input strings, which were generally similar to one another. There are a variety of other inputs to the consensus problem which might lead to different behavior and convergence results, however. If we were to instead run this experiment on DNA sequences (for example, if we posit that the strings are all mutations of the same ancestor), the alphabet {A,T,C,G} is so small that n-grams are likely to be repeated in many parts of the strings, and the lack of position information in our features could make it hard to reach agreement. Another interesting case is when the input strings have little or nothing in common-can we still converge to an optimal consensus in a reasonable number of iterations?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "We can investigate many different cases by creating synthetic data, where we tune the number of input strings K, the length of the strings, the size of the vocabulary |\u03a3|, as well as how similar the strings are. We do this by randomly generating a base string x * \u2208 \u03a3 of length . We then generate K random strings w 1 , . . . , w K , each by passing x * through a noisy edit channel, where each position has independent probability \u00b5 of making an edit. For each position in x * , we uniformly sample once among the three types of edits (substitution, insertion, deletion), and in the case of the first two, we uniformly sample from the vocabulary (excluding the current symbol for substitution). The larger \u00b5, the more mutated the strings will be. For small \u00b5 or large K, the optimal consensus of w 1 , . . . , w K will usually be x * . Table 1 shows results under various settings. Each line presents the percentage of 100 examples that converge within the iteration limit, the average number of iterations to convergence (\u00b1 standard deviation) for those that converged, and the reduction in the objective value that is obtained over a simple baseline of choosing the best string in the input set, to show how much progress the algorithm makes between the 0th and final iteration.",
"cite_spans": [],
"ref_spans": [
{
"start": 837,
"end": 844,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
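{
"text": "An illustrative sketch of this noisy edit channel (hypothetical code; the placement of inserted symbols is an assumption, as the text does not specify it):

# Illustrative sketch of the noisy edit channel used to generate synthetic inputs:
# each position of the base string is independently edited with probability mu.
import random

def mutate(base, mu, sigma, rng=random):
    out = []
    for a in base:
        if rng.random() < mu:
            op = rng.choice(['sub', 'ins', 'del'])
            if op == 'sub':
                out.append(rng.choice([b for b in sigma if b != a]))
            elif op == 'ins':
                out.extend([a, rng.choice(sigma)])   # keep the symbol, insert a random one
            # op == 'del': drop the symbol entirely
        else:
            out.append(a)
    return ''.join(out)

sigma = 'abcdefghijklmnopqrst'                       # |Sigma| = 20
base = ''.join(random.choice(sigma) for _ in range(15))
inputs = [mutate(base, mu=0.2, sigma=sigma) for _ in range(10)]   # K = 10 noisy copies
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},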
{
"text": "As expected, a higher mutation probability slows convergence in all cases, as does having longer input strings. These results also confirm our hypothesis that a small alphabet would lead to slow convergence when using small n-gram features. For these types of strings, which might show up in biological data, one would likely need more informative constraints than position-agnostic n-grams. Figure 2 shows example runs on problems generated at three different parameter settings. We plot the objective value as a function of runtime, showing both the primal objective (3) that we hope to minimize, which we measure as the quality of the best solution among the {x k } that are output at the cur-rent iteration, and the dual objective (5) that our algorithm is maximizing. The dual problem (which is concave in \u03bb) lower bounds the primal. If the two functions ever touch, we know the solution to the dual problem is in the set of feasible solutions to the original primal problem we are attempting to solve, and indeed must be optimal. The figure shows that the dual function always has an initial value of 0, since we initialize each \u03bb k = 0, and then F k will simply return the input w k as its best solution (since w k has zero distance to itself). As the algorithm begins to enforce the agreement constraints, the value of the relaxed dual problem gradually worsens, until it fully satisfies the constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "These plots indicate the number of iterations that have passed and the number of active features. We see that the time per iteration increases as the number of features increases, as expected, because more (and longer) n-grams are being encoded by G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "The three patterns shown are typical of almost all the trials we examined. When the solution is in the original input set (a likely occurrence for large K or small \u00b5 \u2022 ), the primal value will be optimal from the start, and our algorithm only has to prove its optimality. For more challenging problems, the primal solution may jump around in quality at each iteration before settling into a stable part of the space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "To investigate how different n-gram sizes affect convergence rates, we experiment with using the entire set of n-grams (for a fixed n) for the duration of the optimization procedure. Figure 3 shows convergence rates (based on both iterations and runtime) of different values of n for one set of parameters. While bigrams are very fast (average runtime of 14s among those that converged), this converged within 1000 iterations only 78% of the time, and the remaining 22% end up bringing down the average speed (with an overall average runtime over a minute). All larger n-grams converged every time; trigrams had an average runtime of 32s. Our algorithm, which begins with bigrams but brings in more features (up to 5-grams) as needed, had an average runtime of 19s (with 98% convergence). ",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 191,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "K =10, =15,|\u03a3| =20,\u00b5 =0.2 n =2 n =3 n =4 n =5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "Figure 3: Convergence rates for a fixed set of n-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "timality. Even in instances where approximate algorithms perform well, it could be useful to have a true optimality guarantee. For example, our algorithm can be used to produce reference solutions, which are important to have for research purposes. Under a sum-of-pairs Levenshtein objective, the exact multi-sequence alignment can be directly obtained from the Steiner consensus string and vice versa (Gusfield, 1997) . This implies that our exact algorithm could be also used to find exact multisequence alignments, an important problem in natural language processing (Barzilay and Lee, 2003) and computational biology (Durbin et al., 2006) that is almost always solved with approximate methods.",
"cite_spans": [
{
"start": 402,
"end": 418,
"text": "(Gusfield, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 570,
"end": 594,
"text": "(Barzilay and Lee, 2003)",
"ref_id": "BIBREF2"
},
{
"start": 621,
"end": 642,
"text": "(Durbin et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "We have noted that some constraints are more useful than others. Position-specific information is hard to agree on and leads to slow convergence, while pure n-gram constraints do not work as well for long strings where the position may be important. One avenue we are investigating is the use of a non-deterministic G, which would allow us to encode latent variables (Dreyer et al., 2008) , such as loosely defined \"regions\" within a string, and to allow for the encoding of alignments between the input strings. We would also like to extend these methods to other combinatorial optimization problems involving strings, such as inference in graphical models over strings (Dreyer and Eisner, 2009) .",
"cite_spans": [
{
"start": 367,
"end": 388,
"text": "(Dreyer et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 671,
"end": 696,
"text": "(Dreyer and Eisner, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "To conclude, we have presented a general framework for applying dual decomposition to implicit WFSA intersection. This could be applied to a number of NLP problems such as language model and lattice intersection. To demonstrate its utility on a large number of automata, we applied it to consensus decoding, determining the true optimum in a reasonable amount of time on a large majority of cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigating Consensus Performance with Synthetic Data",
"sec_num": "5.3"
},
{
"text": "Such difficulties are typical when trying to apply structured prediction or optimization techniques to predict linguistic objects such as strings or trees, rather than vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The objective in (2) can always be made as small as in (1) by choosing the vectors (x1, . . . xK ) that minimize (1) (because thenP k \u03bb k \u2022 x k = P k \u03bb k \u2022 x = 0 \u2022 x = 0). Hence (2) \u2264 (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is not always possible to construct a G to satisfy this property, as the Lagrangian dual may not be a tight bound to the original problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalized algorithms for constructing statistical language models",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "40--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical lan- guage models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguis- tics, pages 40-47.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "OpenFst: A general and efficient weighted finite-state transducer library",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Schalkwyk",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 12th International Conference on Implementation and Application of Automata, CIAA'07",
"volume": "",
"issue": "",
"pages": "11--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Allauzen, Michael Riley, Johan Schalkwyk, Woj- ciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the 12th International Con- ference on Implementation and Application of Au- tomata, CIAA'07, pages 11-23.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning to paraphrase: An unsupervised approach using multiplesequence alignment",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple- sequence alignment. In Proceedings of the 2003 Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics on Human Lan- guage Technology -Volume 1, NAACL '03, pages 16- 23.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mathematical Statistics: Basic Ideas and Selected Topics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Kjell",
"middle": [
"A"
],
"last": "Bickel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Doksum",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter J. Bickel and Kjell A. Doksum. 2006. Mathemat- ical Statistics: Basic Ideas and Selected Topics, vol- ume 1. Pearson Prentice Hall.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Advances in speech transcription at IBM under the DARPA EARS program",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lidia",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "Saon",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Soltau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Transactions on Audio, Speech & Language Processing",
"volume": "14",
"issue": "5",
"pages": "1596--1608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen, Brian Kingsbury, Lidia Mangu, Daniel Povey, George Saon, Hagen Soltau, and Geoffrey Zweig. 2006. Advances in speech transcription at IBM under the DARPA EARS program. IEEE Trans- actions on Audio, Speech & Language Processing, 14(5):1596-1608.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Graphical models over multiple strings",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP '09",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Jason Eisner. 2009. Graphical mod- els over multiple strings. In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP '09, pages 101-110. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1080--1089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Jason R. Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1080-1089, Honolulu, October.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Biological Sequence Analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Durbin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Eddy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Krogh",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Mitchison",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. 2006. Biological Sequence Analysis. Cambridge University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Minimum Bayes risk methods in automatic speech recognition",
"authors": [
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "William",
"middle": [
"J"
],
"last": "Byrne",
"suffix": ""
}
],
"year": 2003,
"venue": "Pattern Recognition in Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaibhava Goel and William J. Byrne. 2003. Mini- mum Bayes risk methods in automatic speech recog- nition. In Wu Chou and Biing-Hwang Juan, editors, Pattern Recognition in Speech and Language Process- ing. CRC Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gusfield",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Bi- ology. Cambridge University Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the complexity of intersecting finite state automata and NL versus NP",
"authors": [
{
"first": "George",
"middle": [],
"last": "Karakostas",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"J"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "Anastasios",
"middle": [],
"last": "Viglas",
"suffix": ""
}
],
"year": 2003,
"venue": "Theoretical Computer Science",
"volume": "",
"issue": "",
"pages": "257--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Karakostas, Richard J Lipton, and Anastasios Vi- glas. 2003. On the complexity of intersecting finite state automata and NL versus NP. Theoretical Com- puter Science, pages 257-274.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Translation with finite-state devices",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 1998,
"venue": "AMTA'98",
"volume": "",
"issue": "",
"pages": "421--437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight and Yaser Al-Onaizan. 1998. Translation with finite-state devices. In AMTA'98, pages 421-437.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "MRF optimization via dual decomposition: Message-Passing revisited. In Computer Vision",
"authors": [
{
"first": "N",
"middle": [],
"last": "Komodakis",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Paragios",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tziritas",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE 11th International Conference on",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Komodakis, N. Paragios, and G. Tziritas. 2007. MRF optimization via dual decomposition: Message- Passing revisited. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, pages 1-8.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dual decomposition for parsing with non-projective head automata",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1288--1298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decompo- sition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1288-1298.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semiring frameworks and algorithms for shortest-distance problems",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2002,
"venue": "J. Autom. Lang. Comb",
"volume": "7",
"issue": "",
"pages": "321--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 2002. Semiring frameworks and algo- rithms for shortest-distance problems. J. Autom. Lang. Comb., 7:321-350, January.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speech recognition by composition of weighted finite automata",
"authors": [
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando C. N. Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite au- tomata. CoRR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exact decoding of syntactic translation models through Lagrangian relaxation",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "72--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush and Michael Collins. 2011. Exact decoding of syntactic translation models through La- grangian relaxation. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1, HLT '11, pages 72-82.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On dual decomposition and linear programming relaxations for natural language processing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1-11.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The consensus string problem for a metric is NP-complete",
"authors": [
{
"first": "Jeong",
"middle": [
"Seop"
],
"last": "Sim",
"suffix": ""
},
{
"first": "Kunsoo",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2003,
"venue": "J. of Discrete Algorithms",
"volume": "1",
"issue": "",
"pages": "111--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeong Seop Sim and Kunsoo Park. 2003. The consen- sus string problem for a metric is NP-complete. J. of Discrete Algorithms, 1:111-117, February.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The IBM Attila speech recognition toolkit",
"authors": [
{
"first": "H",
"middle": [],
"last": "Soltau",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Saon",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. IEEE Workshop on Spoken Language Technology",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Soltau, G. Saon, and B. Kingsbury. 2010. The IBM Attila speech recognition toolkit. In Proc. IEEE Work- shop on Spoken Language Technology, pages 97-102.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tightening LP relaxations for MAP using message-passing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Talya",
"middle": [],
"last": "Meltzer",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2008,
"venue": "24th Conference in Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "503--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Sontag, Talya Meltzer, Amir Globerson, Yair Weiss, and Tommi Jaakkola. 2008. Tightening LP relaxations for MAP using message-passing. In 24th Conference in Uncertainty in Artificial Intelligence, pages 503-510. AUAI Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to dual decomposition for inference",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2011,
"venue": "Optimization for Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Sontag, Amir Globerson, and Tommi Jaakkola. 2011. Introduction to dual decomposition for in- ference. In Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright, editors, Optimization for Machine Learning. MIT Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to Stochastic Search and Optimization",
"authors": [
{
"first": "James",
"middle": [
"C"
],
"last": "Spall",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James C. Spall. 2003. Introduction to Stochastic Search and Optimization. John Wiley & Sons, Inc., New York, NY, USA, 1 edition.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning structured prediction models: A large margin approach",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Vassil",
"middle": [],
"last": "Chatalbashev",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd international conference on Machine learning, ICML '05",
"volume": "",
"issue": "",
"pages": "896--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd international conference on Machine learn- ing, ICML '05, pages 896-903.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Speech recognition with segmental conditional random fields: A summary of the JHU CLSP 2010 Summer Workshop",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Van Compernolle",
"suffix": ""
},
{
"first": "Kris",
"middle": [],
"last": "Demuynck",
"suffix": ""
},
{
"first": "Les",
"middle": [
"E"
],
"last": "Atlas",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Sell",
"suffix": ""
},
{
"first": "Meihong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Hynek",
"middle": [],
"last": "Hermansky",
"suffix": ""
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Aren",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "G",
"middle": [
"S V S"
],
"last": "Sivaram",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Justine",
"middle": [
"T"
],
"last": "Kao",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Zweig, Patrick Nguyen, Dirk Van Compernolle, Kris Demuynck, Les E. Atlas, Pascal Clark, Gregory Sell, Meihong Wang, Fei Sha, Hynek Hermansky, Damianos Karakos, Aren Jansen, Samuel Thomas, Sivaram G. S. V. S., Sam Bowman, and Justine T. Kao. 2011. Speech recognition with segmental conditional random fields: A summary of the JHU CLSP 2010 Summer Workshop. In ICASSP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example run of the consensus problem on K = 25 strings on a Broadcast News utterance, showing x 1 , . . . , x 5 at the 0th, 300th, 375th, and 472nd iterations.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "The algorithm's behavior on three specific consensus problems. The curves show the current values of the primal bound (based on the best string at the current iteration) and dual bound h(\u03bb). The horizontal axis shows runtime. Red upper triangles are placed every 10 iterations, while blue lower triangles are placed for every 10% increase in the size of the feature set \u0398.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Discussion and Future WorkAn important (and motivating) property of Lagrangian relaxation methods is the certificate of op-",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "A summary of results for various consensus problems, as described in \u00a75.3.",
"type_str": "table",
"html": null
}
}
}
}